00:00:00.000 Started by upstream project "autotest-per-patch" build number 132410
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.131 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.132 The recommended git tool is: git
00:00:00.132 using credential 00000000-0000-0000-0000-000000000002
00:00:00.135 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.206 Fetching changes from the remote Git repository
00:00:00.208 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.264 Using shallow fetch with depth 1
00:00:00.264 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.264 > git --version # timeout=10
00:00:00.311 > git --version # 'git version 2.39.2'
00:00:00.311 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.335 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.335 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.848 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.861 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.874 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.874 > git config core.sparsecheckout # timeout=10
00:00:07.885 > git read-tree -mu HEAD # timeout=10
00:00:07.903 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.925 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.925 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.034 [Pipeline] Start of Pipeline
00:00:08.046 [Pipeline] library
00:00:08.049 Loading library shm_lib@master
00:00:08.049 Library shm_lib@master is cached. Copying from home.
00:00:08.064 [Pipeline] node
00:00:08.072 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:00:08.075 [Pipeline] {
00:00:08.084 [Pipeline] catchError
00:00:08.085 [Pipeline] {
00:00:08.097 [Pipeline] wrap
00:00:08.106 [Pipeline] {
00:00:08.115 [Pipeline] stage
00:00:08.117 [Pipeline] { (Prologue)
00:00:08.137 [Pipeline] echo
00:00:08.138 Node: VM-host-WFP1
00:00:08.148 [Pipeline] cleanWs
00:00:08.158 [WS-CLEANUP] Deleting project workspace...
00:00:08.158 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.165 [WS-CLEANUP] done
00:00:08.374 [Pipeline] setCustomBuildProperty
00:00:08.455 [Pipeline] httpRequest
00:00:08.913 [Pipeline] echo
00:00:08.915 Sorcerer 10.211.164.20 is alive
00:00:08.922 [Pipeline] retry
00:00:08.923 [Pipeline] {
00:00:08.932 [Pipeline] httpRequest
00:00:08.935 HttpMethod: GET
00:00:08.936 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.937 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.945 Response Code: HTTP/1.1 200 OK
00:00:08.945 Success: Status code 200 is in the accepted range: 200,404
00:00:08.946 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.435 [Pipeline] }
00:00:15.454 [Pipeline] // retry
00:00:15.462 [Pipeline] sh
00:00:15.744 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.760 [Pipeline] httpRequest
00:00:16.205 [Pipeline] echo
00:00:16.208 Sorcerer 10.211.164.20 is alive
00:00:16.218 [Pipeline] retry
00:00:16.221 [Pipeline] {
00:00:16.238 [Pipeline] httpRequest
00:00:16.243 HttpMethod: GET
00:00:16.243 URL: http://10.211.164.20/packages/spdk_1981e6eeca201ec19c7d70102797d8ef3cab85cb.tar.gz
00:00:16.244 Sending request to url: http://10.211.164.20/packages/spdk_1981e6eeca201ec19c7d70102797d8ef3cab85cb.tar.gz
00:00:16.258 Response Code: HTTP/1.1 200 OK
00:00:16.259 Success: Status code 200 is in the accepted range: 200,404
00:00:16.259 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_1981e6eeca201ec19c7d70102797d8ef3cab85cb.tar.gz
00:01:05.963 [Pipeline] }
00:01:05.981 [Pipeline] // retry
00:01:05.989 [Pipeline] sh
00:01:06.273 + tar --no-same-owner -xf spdk_1981e6eeca201ec19c7d70102797d8ef3cab85cb.tar.gz
00:01:08.822 [Pipeline] sh
00:01:09.105 + git -C spdk log --oneline -n5
00:01:09.105 1981e6eec bdevperf: Add hide_metadata option
00:01:09.105 66a383faf bdevperf: Get metadata config by not bdev but bdev_desc
00:01:09.105 25916e30c bdevperf: Store the result of DIF type check into job structure
00:01:09.105 bd9804982 bdevperf: g_main_thread calls bdev_open() instead of job->thread
00:01:09.105 2e015e34f bdevperf: Remove TAILQ_REMOVE which may result in potential memory leak
00:01:09.124 [Pipeline] writeFile
00:01:09.138 [Pipeline] sh
00:01:09.420 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:09.432 [Pipeline] sh
00:01:09.713 + cat autorun-spdk.conf
00:01:09.713 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.713 SPDK_TEST_NVMF=1
00:01:09.713 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:09.713 SPDK_TEST_USDT=1
00:01:09.713 SPDK_TEST_NVMF_MDNS=1
00:01:09.713 SPDK_RUN_UBSAN=1
00:01:09.713 NET_TYPE=virt
00:01:09.713 SPDK_JSONRPC_GO_CLIENT=1
00:01:09.713 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:09.719 RUN_NIGHTLY=0
00:01:09.721 [Pipeline] }
00:01:09.736 [Pipeline] // stage
00:01:09.752 [Pipeline] stage
00:01:09.754 [Pipeline] { (Run VM)
00:01:09.767 [Pipeline] sh
00:01:10.049 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:10.049 + echo 'Start stage prepare_nvme.sh'
00:01:10.049 Start stage prepare_nvme.sh
00:01:10.049 + [[ -n 0 ]]
00:01:10.049 + disk_prefix=ex0
00:01:10.049 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:01:10.049 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:01:10.049 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:01:10.049 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:10.049 ++ SPDK_TEST_NVMF=1
00:01:10.049 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:10.049 ++ SPDK_TEST_USDT=1
00:01:10.049 ++ SPDK_TEST_NVMF_MDNS=1
00:01:10.049 ++ SPDK_RUN_UBSAN=1
00:01:10.049 ++ NET_TYPE=virt
00:01:10.049 ++ SPDK_JSONRPC_GO_CLIENT=1
00:01:10.049 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:10.049 ++ RUN_NIGHTLY=0
00:01:10.049 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:10.049 + nvme_files=()
00:01:10.049 + declare -A nvme_files
00:01:10.049 + backend_dir=/var/lib/libvirt/images/backends
00:01:10.049 + nvme_files['nvme.img']=5G
00:01:10.049 + nvme_files['nvme-cmb.img']=5G
00:01:10.049 + nvme_files['nvme-multi0.img']=4G
00:01:10.049 + nvme_files['nvme-multi1.img']=4G
00:01:10.049 + nvme_files['nvme-multi2.img']=4G
00:01:10.049 + nvme_files['nvme-openstack.img']=8G
00:01:10.049 + nvme_files['nvme-zns.img']=5G
00:01:10.049 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:10.049 + (( SPDK_TEST_FTL == 1 ))
00:01:10.049 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:10.049 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:10.049 + for nvme in "${!nvme_files[@]}"
00:01:10.049 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:01:10.049 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:10.049 + for nvme in "${!nvme_files[@]}"
00:01:10.049 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:01:10.050 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:10.050 + for nvme in "${!nvme_files[@]}"
00:01:10.050 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:01:10.050 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:10.050 + for nvme in "${!nvme_files[@]}"
00:01:10.050 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:01:11.042 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:11.042 + for nvme in "${!nvme_files[@]}"
00:01:11.042 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:01:11.042 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:11.042 + for nvme in "${!nvme_files[@]}"
00:01:11.042 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:01:11.042 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:11.042 + for nvme in "${!nvme_files[@]}"
00:01:11.042 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:01:11.610 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:11.610 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:01:11.610 + echo 'End stage prepare_nvme.sh'
00:01:11.610 End stage prepare_nvme.sh
00:01:11.620 [Pipeline] sh
00:01:11.902 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:11.902 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39
00:01:11.902
00:01:11.902 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:01:11.902 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:01:11.902 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:11.902 HELP=0
00:01:11.902 DRY_RUN=0
00:01:11.902 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:01:11.902 NVME_DISKS_TYPE=nvme,nvme,
00:01:11.902 NVME_AUTO_CREATE=0
00:01:11.902 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:01:11.902 NVME_CMB=,,
00:01:11.902 NVME_PMR=,,
00:01:11.902 NVME_ZNS=,,
00:01:11.902 NVME_MS=,,
00:01:11.902 NVME_FDP=,,
00:01:11.902 SPDK_VAGRANT_DISTRO=fedora39
00:01:11.902 SPDK_VAGRANT_VMCPU=10
00:01:11.902 SPDK_VAGRANT_VMRAM=12288
00:01:11.902 SPDK_VAGRANT_PROVIDER=libvirt
00:01:11.902 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:11.902 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:11.902 SPDK_OPENSTACK_NETWORK=0
00:01:11.902 VAGRANT_PACKAGE_BOX=0
00:01:11.902 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:11.902 FORCE_DISTRO=true
00:01:11.902 VAGRANT_BOX_VERSION=
00:01:11.902 EXTRA_VAGRANTFILES=
00:01:11.902 NIC_MODEL=e1000
00:01:11.902
00:01:11.902 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt'
00:01:11.902 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:15.188 Bringing machine 'default' up with 'libvirt' provider...
00:01:17.092 ==> default: Creating image (snapshot of base box volume).
00:01:17.092 ==> default: Creating domain with the following settings...
00:01:17.092 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732115342_481c8b393d02316751b1
00:01:17.092 ==> default: -- Domain type: kvm
00:01:17.092 ==> default: -- Cpus: 10
00:01:17.092 ==> default: -- Feature: acpi
00:01:17.092 ==> default: -- Feature: apic
00:01:17.092 ==> default: -- Feature: pae
00:01:17.092 ==> default: -- Memory: 12288M
00:01:17.092 ==> default: -- Memory Backing: hugepages:
00:01:17.092 ==> default: -- Management MAC:
00:01:17.092 ==> default: -- Loader:
00:01:17.092 ==> default: -- Nvram:
00:01:17.092 ==> default: -- Base box: spdk/fedora39
00:01:17.092 ==> default: -- Storage pool: default
00:01:17.092 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732115342_481c8b393d02316751b1.img (20G)
00:01:17.092 ==> default: -- Volume Cache: default
00:01:17.092 ==> default: -- Kernel:
00:01:17.092 ==> default: -- Initrd:
00:01:17.092 ==> default: -- Graphics Type: vnc
00:01:17.093 ==> default: -- Graphics Port: -1
00:01:17.093 ==> default: -- Graphics IP: 127.0.0.1
00:01:17.093 ==> default: -- Graphics Password: Not defined
00:01:17.093 ==> default: -- Video Type: cirrus
00:01:17.093 ==> default: -- Video VRAM: 9216
00:01:17.093 ==> default: -- Sound Type:
00:01:17.093 ==> default: -- Keymap: en-us
00:01:17.093 ==> default: -- TPM Path:
00:01:17.093 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:17.093 ==> default: -- Command line args:
00:01:17.093 ==> default: -> value=-device,
00:01:17.093 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:17.093 ==> default: -> value=-drive,
00:01:17.093 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:01:17.093 ==> default: -> value=-device,
00:01:17.093 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.093 ==> default: -> value=-device,
00:01:17.093 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:17.093 ==> default: -> value=-drive,
00:01:17.093 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:17.093 ==> default: -> value=-device,
00:01:17.093 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.093 ==> default: -> value=-drive,
00:01:17.093 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:17.093 ==> default: -> value=-device,
00:01:17.093 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.093 ==> default: -> value=-drive,
00:01:17.093 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:17.093 ==> default: -> value=-device,
00:01:17.093 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.660 ==> default: Creating shared folders metadata...
00:01:17.660 ==> default: Starting domain.
00:01:20.947 ==> default: Waiting for domain to get an IP address...
00:01:39.070 ==> default: Waiting for SSH to become available...
00:01:39.070 ==> default: Configuring and enabling network interfaces...
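Note: the "-> value=..." pairs logged above are the extra QEMU arguments vagrant-libvirt passes through to the emulator when it defines the domain. For readability, they assemble into roughly the following command line (a sketch reconstructed from the logged values only, with the trailing commas dropped; the emulator path is taken from SPDK_QEMU_EMULATOR above, and the machine/CPU/memory/network options that libvirt adds on its own are omitted):

  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

The net effect inside the guest is one controller (serial 12340) with a single namespace backed by the 5G ex0-nvme.img, and a second controller (serial 12341) with three 4G namespaces (nsid 1-3), matching the nvme0n1 and nvme1n1-nvme1n3 block devices that setup.sh reports later in this log.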
00:01:44.342 default: SSH address: 192.168.121.58:22
00:01:44.342 default: SSH username: vagrant
00:01:44.342 default: SSH auth method: private key
00:01:46.883 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:56.864 ==> default: Mounting SSHFS shared folder...
00:01:58.240 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:58.240 ==> default: Checking Mount..
00:02:00.144 ==> default: Folder Successfully Mounted!
00:02:00.144 ==> default: Running provisioner: file...
00:02:01.081 default: ~/.gitconfig => .gitconfig
00:02:01.650
00:02:01.650 SUCCESS!
00:02:01.650
00:02:01.650 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:01.650 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:01.650 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:01.650
00:02:01.659 [Pipeline] }
00:02:01.677 [Pipeline] // stage
00:02:01.688 [Pipeline] dir
00:02:01.689 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt
00:02:01.691 [Pipeline] {
00:02:01.705 [Pipeline] catchError
00:02:01.707 [Pipeline] {
00:02:01.722 [Pipeline] sh
00:02:02.010 + vagrant ssh-config --host vagrant
00:02:02.010 + sed -ne /^Host/,$p
00:02:02.010 + tee ssh_conf
00:02:05.376 Host vagrant
00:02:05.376 HostName 192.168.121.58
00:02:05.376 User vagrant
00:02:05.376 Port 22
00:02:05.376 UserKnownHostsFile /dev/null
00:02:05.376 StrictHostKeyChecking no
00:02:05.376 PasswordAuthentication no
00:02:05.376 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:05.376 IdentitiesOnly yes
00:02:05.376 LogLevel FATAL
00:02:05.376 ForwardAgent yes
00:02:05.376 ForwardX11 yes
00:02:05.376
00:02:05.419 [Pipeline] withEnv
00:02:05.421 [Pipeline] {
00:02:05.435 [Pipeline] sh
00:02:05.716 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:05.716 source /etc/os-release
00:02:05.716 [[ -e /image.version ]] && img=$(< /image.version)
00:02:05.716 # Minimal, systemd-like check.
00:02:05.716 if [[ -e /.dockerenv ]]; then
00:02:05.716 # Clear garbage from the node's name:
00:02:05.716 # agt-er_autotest_547-896 -> autotest_547-896
00:02:05.716 # $HOSTNAME is the actual container id
00:02:05.716 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:05.716 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:05.716 # We can assume this is a mount from a host where container is running,
00:02:05.716 # so fetch its hostname to easily identify the target swarm worker.
00:02:05.716 container="$(< /etc/hostname) ($agent)"
00:02:05.716 else
00:02:05.716 # Fallback
00:02:05.716 container=$agent
00:02:05.716 fi
00:02:05.716 fi
00:02:05.716 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:05.716
00:02:05.988 [Pipeline] }
00:02:06.006 [Pipeline] // withEnv
00:02:06.015 [Pipeline] setCustomBuildProperty
00:02:06.032 [Pipeline] stage
00:02:06.035 [Pipeline] { (Tests)
00:02:06.052 [Pipeline] sh
00:02:06.334 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:06.606 [Pipeline] sh
00:02:06.888 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:07.162 [Pipeline] timeout
00:02:07.163 Timeout set to expire in 1 hr 0 min
00:02:07.165 [Pipeline] {
00:02:07.179 [Pipeline] sh
00:02:07.461 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:08.028 HEAD is now at 1981e6eec bdevperf: Add hide_metadata option
00:02:08.040 [Pipeline] sh
00:02:08.315 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:08.591 [Pipeline] sh
00:02:08.868 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:09.142 [Pipeline] sh
00:02:09.420 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:02:09.678 ++ readlink -f spdk_repo
00:02:09.678 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:09.678 + [[ -n /home/vagrant/spdk_repo ]]
00:02:09.678 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:09.678 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:09.678 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:09.678 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:09.678 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:09.678 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:02:09.678 + cd /home/vagrant/spdk_repo
00:02:09.678 + source /etc/os-release
00:02:09.678 ++ NAME='Fedora Linux'
00:02:09.678 ++ VERSION='39 (Cloud Edition)'
00:02:09.678 ++ ID=fedora
00:02:09.678 ++ VERSION_ID=39
00:02:09.678 ++ VERSION_CODENAME=
00:02:09.678 ++ PLATFORM_ID=platform:f39
00:02:09.678 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:09.678 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:09.678 ++ LOGO=fedora-logo-icon
00:02:09.678 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:09.678 ++ HOME_URL=https://fedoraproject.org/
00:02:09.678 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:09.678 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:09.678 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:09.678 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:09.678 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:09.678 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:09.678 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:09.678 ++ SUPPORT_END=2024-11-12
00:02:09.678 ++ VARIANT='Cloud Edition'
00:02:09.678 ++ VARIANT_ID=cloud
00:02:09.678 + uname -a
00:02:09.678 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:09.678 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:10.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:10.244 Hugepages
00:02:10.244 node hugesize free / total
00:02:10.244 node0 1048576kB 0 / 0
00:02:10.244 node0 2048kB 0 / 0
00:02:10.244
00:02:10.244 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:10.244 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:10.244 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:10.244 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:10.244 + rm -f /tmp/spdk-ld-path
00:02:10.244 + source autorun-spdk.conf
00:02:10.244 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.244 ++ SPDK_TEST_NVMF=1
00:02:10.245 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:10.245 ++ SPDK_TEST_USDT=1
00:02:10.245 ++ SPDK_TEST_NVMF_MDNS=1
00:02:10.245 ++ SPDK_RUN_UBSAN=1
00:02:10.245 ++ NET_TYPE=virt
00:02:10.245 ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:10.245 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:10.245 ++ RUN_NIGHTLY=0
00:02:10.245 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:10.245 + [[ -n '' ]]
00:02:10.245 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:10.245 + for M in /var/spdk/build-*-manifest.txt
00:02:10.245 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:10.245 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:10.245 + for M in /var/spdk/build-*-manifest.txt
00:02:10.245 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:10.245 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:10.245 + for M in /var/spdk/build-*-manifest.txt
00:02:10.245 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:10.245 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:10.245 ++ uname
00:02:10.245 + [[ Linux == \L\i\n\u\x ]]
00:02:10.245 + sudo dmesg -T
00:02:10.503 + sudo dmesg --clear
00:02:10.503 + dmesg_pid=5214
00:02:10.503 + [[ Fedora Linux == FreeBSD ]]
00:02:10.503 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.503 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.503 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:10.503 + sudo dmesg -Tw
00:02:10.503 + [[ -x /usr/src/fio-static/fio ]]
00:02:10.503 + export FIO_BIN=/usr/src/fio-static/fio
00:02:10.503 + FIO_BIN=/usr/src/fio-static/fio
00:02:10.504 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:10.504 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:10.504 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:10.504 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:10.504 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:10.504 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:10.504 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:10.504 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:10.504 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:10.504 15:09:56 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
15:09:56 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
15:09:56 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
15:09:56 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
15:09:56 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
15:09:56 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1
15:09:56 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1
15:09:56 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
15:09:56 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt
15:09:56 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1
15:09:56 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
15:09:56 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
15:09:56 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
15:09:56 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
15:09:56 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
15:09:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
15:09:56 -- scripts/common.sh@15 -- $ shopt -s extglob
15:09:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
15:09:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
15:09:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
15:09:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:09:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:09:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:09:56 -- paths/export.sh@5 -- $ export PATH
15:09:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:09:56 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
15:09:56 -- common/autobuild_common.sh@493 -- $ date +%s
15:09:56 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732115396.XXXXXX
15:09:56 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732115396.VtAynG
15:09:56 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
15:09:56 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
15:09:56 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
15:09:56 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
15:09:56 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
15:09:56 -- common/autobuild_common.sh@509 -- $ get_config_params
15:09:56 -- common/autotest_common.sh@409 -- $ xtrace_disable
15:09:56 -- common/autotest_common.sh@10 -- $ set +x
15:09:57 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
15:09:57 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
15:09:57 -- pm/common@17 -- $ local monitor
15:09:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
15:09:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
15:09:57 -- pm/common@25 -- $ sleep 1
15:09:57 -- pm/common@21 -- $ date +%s
15:09:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732115397
15:09:57 -- pm/common@21 -- $ date +%s
15:09:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732115397
00:02:10.762 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732115397_collect-cpu-load.pm.log
00:02:10.762 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732115397_collect-vmstat.pm.log
00:02:11.698 15:09:58 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
15:09:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
15:09:58 -- spdk/autobuild.sh@12 -- $ umask 022
15:09:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
15:09:58 -- spdk/autobuild.sh@16 -- $ date -u
00:02:11.698 Wed Nov 20 03:09:58 PM UTC 2024
15:09:58 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:11.698 v25.01-pre-239-g1981e6eec
15:09:58 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
15:09:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
15:09:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
15:09:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
15:09:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable
15:09:58 -- common/autotest_common.sh@10 -- $ set +x
00:02:11.698 ************************************
00:02:11.698 START TEST ubsan
00:02:11.698 ************************************
00:02:11.698 using ubsan
15:09:58 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:11.698
00:02:11.698 real 0m0.000s
00:02:11.698 user 0m0.000s
00:02:11.698 sys 0m0.000s
00:02:11.698 ************************************
00:02:11.698 END TEST ubsan
00:02:11.698 ************************************
15:09:58 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
15:09:58 ubsan -- common/autotest_common.sh@10 -- $ set +x
15:09:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
15:09:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
15:09:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
15:09:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
15:09:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
15:09:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
15:09:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
15:09:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
15:09:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
00:02:11.956 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:11.956 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:12.544 Using 'verbs' RDMA provider
00:02:31.567 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:46.454 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
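Note: the config_params string assembled by get_config_params above is derived from the autorun-spdk.conf flags sourced earlier in this log. A minimal sketch of that mapping, assuming flag names along the lines of SPDK's get_config_params helper (simplified; the real function in common/autobuild_common.sh handles many more test flags than shown here):

  # Hypothetical, simplified illustration of conf-flag -> configure-flag mapping.
  source autorun-spdk.conf
  config_params='--enable-debug --enable-werror'
  [[ $SPDK_RUN_UBSAN -eq 1 ]] && config_params+=' --enable-ubsan'          # SPDK_RUN_UBSAN=1
  [[ $SPDK_TEST_USDT -eq 1 ]] && config_params+=' --with-usdt'             # SPDK_TEST_USDT=1
  [[ $SPDK_TEST_NVMF_MDNS -eq 1 ]] && config_params+=' --with-avahi'       # SPDK_TEST_NVMF_MDNS=1
  [[ $SPDK_JSONRPC_GO_CLIENT -eq 1 ]] && config_params+=' --with-golang'   # SPDK_JSONRPC_GO_CLIENT=1
  ./configure $config_params

Each of these four flags does appear in the ./configure invocation that autobuild.sh@67 runs above; the remaining flags in that line (--with-rdma, --with-idxd, --with-fio, etc.) come from defaults and other conf settings not spelled out here.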
00:02:46.454 go version go1.21.1 linux/amd64
00:02:46.454 Creating mk/config.mk...done.
00:02:46.454 Creating mk/cc.flags.mk...done.
00:02:46.454 Type 'make' to build.
00:02:46.454 15:10:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
15:10:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
15:10:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable
15:10:32 -- common/autotest_common.sh@10 -- $ set +x
00:02:46.454 ************************************
00:02:46.454 START TEST make
00:02:46.454 ************************************
15:10:32 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:46.713 make[1]: Nothing to be done for 'all'.
00:03:04.859 The Meson build system
00:03:04.859 Version: 1.5.0
00:03:04.859 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:04.859 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:04.859 Build type: native build
00:03:04.859 Program cat found: YES (/usr/bin/cat)
00:03:04.859 Project name: DPDK
00:03:04.859 Project version: 24.03.0
00:03:04.859 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:04.859 C linker for the host machine: cc ld.bfd 2.40-14
00:03:04.859 Host machine cpu family: x86_64
00:03:04.859 Host machine cpu: x86_64
00:03:04.859 Message: ## Building in Developer Mode ##
00:03:04.859 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:04.859 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:04.859 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:04.859 Program python3 found: YES (/usr/bin/python3)
00:03:04.859 Program cat found: YES (/usr/bin/cat)
00:03:04.860 Compiler for C supports arguments -march=native: YES
00:03:04.860 Checking for size of "void *" : 8
00:03:04.860 Checking for size of "void *" : 8 (cached)
00:03:04.860 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:04.860 Library m found: YES
00:03:04.860 Library numa found: YES
00:03:04.860 Has header "numaif.h" : YES
00:03:04.860 Library fdt found: NO
00:03:04.860 Library execinfo found: NO
00:03:04.860 Has header "execinfo.h" : YES
00:03:04.860 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:04.860 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:04.860 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:04.860 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:04.860 Run-time dependency openssl found: YES 3.1.1
00:03:04.860 Run-time dependency libpcap found: YES 1.10.4
00:03:04.860 Has header "pcap.h" with dependency libpcap: YES
00:03:04.860 Compiler for C supports arguments -Wcast-qual: YES
00:03:04.860 Compiler for C supports arguments -Wdeprecated: YES
00:03:04.860 Compiler for C supports arguments -Wformat: YES
00:03:04.860 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:04.860 Compiler for C supports arguments -Wformat-security: NO
00:03:04.860 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:04.860 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:04.860 Compiler for C supports arguments -Wnested-externs: YES
00:03:04.860 Compiler for C supports arguments -Wold-style-definition: YES
00:03:04.860 Compiler for C supports arguments -Wpointer-arith: YES
00:03:04.860 Compiler for C supports arguments -Wsign-compare: YES
00:03:04.860 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:04.860 Compiler for C supports arguments -Wundef: YES
00:03:04.860 Compiler for C supports arguments -Wwrite-strings: YES
00:03:04.860 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:04.860 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:04.860 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:04.860 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:04.860 Program objdump found: YES (/usr/bin/objdump)
00:03:04.860 Compiler for C supports arguments -mavx512f: YES
00:03:04.860 Checking if "AVX512 checking" compiles: YES
00:03:04.860 Fetching value of define "__SSE4_2__" : 1
00:03:04.860 Fetching value of define "__AES__" : 1
00:03:04.860 Fetching value of define "__AVX__" : 1
00:03:04.860 Fetching value of define "__AVX2__" : 1
00:03:04.860 Fetching value of define "__AVX512BW__" : 1
00:03:04.860 Fetching value of define "__AVX512CD__" : 1
00:03:04.860 Fetching value of define "__AVX512DQ__" : 1
00:03:04.860 Fetching value of define "__AVX512F__" : 1
00:03:04.860 Fetching value of define "__AVX512VL__" : 1
00:03:04.860 Fetching value of define "__PCLMUL__" : 1
00:03:04.860 Fetching value of define "__RDRND__" : 1
00:03:04.860 Fetching value of define "__RDSEED__" : 1
00:03:04.860 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:04.860 Fetching value of define "__znver1__" : (undefined)
00:03:04.860 Fetching value of define "__znver2__" : (undefined)
00:03:04.860 Fetching value of define "__znver3__" : (undefined)
00:03:04.860 Fetching value of define "__znver4__" : (undefined)
00:03:04.860 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:04.860 Message: lib/log: Defining dependency "log"
00:03:04.860 Message: lib/kvargs: Defining dependency "kvargs"
00:03:04.860 Message: lib/telemetry: Defining dependency "telemetry"
00:03:04.860 Checking for function "getentropy" : NO
00:03:04.860 Message: lib/eal: Defining dependency "eal"
00:03:04.860 Message: lib/ring: Defining dependency "ring"
00:03:04.860 Message: lib/rcu: Defining dependency "rcu"
00:03:04.860 Message: lib/mempool: Defining dependency "mempool"
00:03:04.860 Message: lib/mbuf: Defining dependency "mbuf"
00:03:04.860 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:04.860 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:04.860 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:04.860 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:04.860 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:04.860 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:04.860 Compiler for C supports arguments -mpclmul: YES
00:03:04.860 Compiler for C supports arguments -maes: YES
00:03:04.860 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:04.860 Compiler for C supports arguments -mavx512bw: YES
00:03:04.860 Compiler for C supports arguments -mavx512dq: YES
00:03:04.860 Compiler for C supports arguments -mavx512vl: YES
00:03:04.860 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:04.860 Compiler for C supports arguments -mavx2: YES
00:03:04.860 Compiler for C supports arguments -mavx: YES
00:03:04.860 Message: lib/net: Defining dependency "net"
00:03:04.860 Message: lib/meter: Defining dependency "meter"
00:03:04.860 Message: lib/ethdev: Defining dependency "ethdev"
00:03:04.860 Message: lib/pci: Defining dependency "pci"
00:03:04.860 Message: lib/cmdline: Defining dependency "cmdline"
00:03:04.860 Message: lib/hash: Defining dependency "hash"
00:03:04.860 Message: lib/timer: Defining dependency "timer"
00:03:04.860 Message: lib/compressdev: Defining dependency "compressdev"
00:03:04.860 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:04.860 Message: lib/dmadev: Defining dependency "dmadev"
00:03:04.860 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:04.860 Message: lib/power: Defining dependency "power"
00:03:04.860 Message: lib/reorder: Defining dependency "reorder"
00:03:04.860 Message: lib/security: Defining dependency "security"
00:03:04.860 Has header "linux/userfaultfd.h" : YES
00:03:04.860 Has header "linux/vduse.h" : YES
00:03:04.860 Message: lib/vhost: Defining dependency "vhost"
00:03:04.860 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:04.860 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:04.860 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:04.860 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:04.860 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:04.860 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:04.860 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:04.860 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:04.860 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:04.860 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:04.860 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:04.860 Configuring doxy-api-html.conf using configuration
00:03:04.860 Configuring doxy-api-man.conf using configuration
00:03:04.860 Program mandb found: YES (/usr/bin/mandb)
00:03:04.860 Program sphinx-build found: NO
00:03:04.860 Configuring rte_build_config.h using configuration
00:03:04.860 Message:
00:03:04.860 =================
00:03:04.860 Applications Enabled
00:03:04.860 =================
00:03:04.860
00:03:04.860 apps:
00:03:04.860
00:03:04.860
00:03:04.860 Message:
00:03:04.860 =================
00:03:04.860 Libraries Enabled
00:03:04.860 =================
00:03:04.860
00:03:04.860 libs:
00:03:04.860 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:04.860 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:04.860 cryptodev, dmadev, power, reorder, security, vhost,
00:03:04.860
00:03:04.860 Message:
00:03:04.860 ===============
00:03:04.860 Drivers Enabled
00:03:04.860 ===============
00:03:04.860
00:03:04.860 common:
00:03:04.860
00:03:04.860 bus:
00:03:04.860 pci, vdev,
00:03:04.860 mempool:
00:03:04.860 ring,
00:03:04.860 dma:
00:03:04.860
00:03:04.860 net:
00:03:04.860
00:03:04.860 crypto:
00:03:04.860
00:03:04.860 compress:
00:03:04.860
00:03:04.860 vdpa:
00:03:04.860
00:03:04.860
00:03:04.860 Message:
00:03:04.860 =================
00:03:04.860 Content Skipped
00:03:04.860 =================
00:03:04.860
00:03:04.860 apps:
00:03:04.860 dumpcap: explicitly disabled via build config
00:03:04.860 graph: explicitly disabled via build config
00:03:04.860 pdump: explicitly disabled via build config
00:03:04.860 proc-info: explicitly disabled via build config
00:03:04.860 test-acl: explicitly disabled via build config
00:03:04.860 test-bbdev: explicitly disabled via build config
00:03:04.860 test-cmdline: explicitly disabled via build config
00:03:04.860 test-compress-perf: explicitly disabled via build config
00:03:04.860 test-crypto-perf: explicitly disabled via build config
00:03:04.860 test-dma-perf: explicitly disabled via build config
00:03:04.860 test-eventdev: explicitly disabled via build config
00:03:04.860 test-fib: explicitly disabled via build config
00:03:04.860 test-flow-perf: explicitly disabled via build config
00:03:04.860 test-gpudev: explicitly disabled via build config
00:03:04.860 test-mldev: explicitly disabled via build config
00:03:04.860 test-pipeline: explicitly disabled via build config
00:03:04.860 test-pmd: explicitly disabled via build config
00:03:04.860 test-regex: explicitly disabled via build config
00:03:04.860 test-sad: explicitly disabled via build config
00:03:04.860 test-security-perf: explicitly disabled via build config
00:03:04.860
00:03:04.860 libs:
00:03:04.860 argparse: explicitly disabled via build config
00:03:04.860 metrics: explicitly disabled via build config
00:03:04.860 acl: explicitly disabled via build config
00:03:04.860 bbdev: explicitly disabled via build config
00:03:04.860 bitratestats: explicitly disabled via build config
00:03:04.860 bpf: explicitly disabled via build config
00:03:04.860 cfgfile: explicitly disabled via build config
00:03:04.860 distributor: explicitly disabled via build config
00:03:04.860 efd: explicitly disabled via build config
00:03:04.860 eventdev: explicitly disabled via build config
00:03:04.860 dispatcher: explicitly disabled via build config
00:03:04.860 gpudev: explicitly disabled via build config
00:03:04.860 gro: explicitly disabled via build config
00:03:04.860 gso: explicitly disabled via build config
00:03:04.860 ip_frag: explicitly disabled via build config
00:03:04.860 jobstats: explicitly disabled via build config
00:03:04.860 latencystats: explicitly disabled via build config
00:03:04.860 lpm: explicitly disabled via build config
00:03:04.860 member: explicitly disabled via build config
00:03:04.860 pcapng: explicitly disabled via build config
00:03:04.860 rawdev: explicitly disabled via build config
00:03:04.860 regexdev: explicitly disabled via build config
00:03:04.860 mldev: explicitly disabled via build config
00:03:04.860 rib: explicitly disabled via build config
00:03:04.860 sched: explicitly disabled via build config
00:03:04.861 stack: explicitly disabled via build config
00:03:04.861 ipsec: explicitly disabled via build config
00:03:04.861 pdcp: explicitly disabled via build config
00:03:04.861 fib: explicitly disabled via build config
00:03:04.861 port: explicitly disabled via build config
00:03:04.861 pdump: explicitly disabled via build config
00:03:04.861 table: explicitly disabled via build config
00:03:04.861 pipeline: explicitly disabled via build config
00:03:04.861 graph: explicitly disabled via build config
00:03:04.861 node: explicitly disabled via build config
00:03:04.861
00:03:04.861 drivers:
00:03:04.861 common/cpt: not in enabled drivers build config
00:03:04.861 common/dpaax: not in enabled drivers build config
00:03:04.861 common/iavf: not in enabled drivers build config
00:03:04.861 common/idpf: not in enabled drivers build config
00:03:04.861 common/ionic: not in enabled drivers build config
00:03:04.861 common/mvep: not in enabled drivers build config
00:03:04.861 common/octeontx: not in enabled drivers build config
00:03:04.861 bus/auxiliary: not in enabled drivers build config
00:03:04.861 bus/cdx: not in enabled drivers build config
00:03:04.861 bus/dpaa: not in enabled drivers build config
00:03:04.861 bus/fslmc: not in enabled drivers build config
00:03:04.861 bus/ifpga: not in enabled drivers build config
00:03:04.861 bus/platform: not in enabled drivers build config
00:03:04.861 bus/uacce: not in enabled drivers build config
00:03:04.861 bus/vmbus: not in enabled drivers build config
00:03:04.861 common/cnxk: not in enabled drivers build config
00:03:04.861 common/mlx5: not in enabled drivers build config
00:03:04.861 common/nfp: not in enabled drivers build config
00:03:04.861 common/nitrox: not in enabled drivers build config
00:03:04.861 common/qat: not in enabled drivers build config
00:03:04.861 common/sfc_efx: not in enabled drivers build config
00:03:04.861 mempool/bucket: not in enabled drivers build config
00:03:04.861 mempool/cnxk: not in enabled drivers build config
00:03:04.861 mempool/dpaa: not in enabled drivers build config
00:03:04.861 mempool/dpaa2: not in enabled drivers build config
00:03:04.861 mempool/octeontx: not in enabled drivers build config
00:03:04.861 mempool/stack: not in enabled drivers build config
00:03:04.861 dma/cnxk: not in enabled drivers build config
00:03:04.861 dma/dpaa: not in enabled drivers build config
00:03:04.861 dma/dpaa2: not in enabled drivers build config
00:03:04.861 dma/hisilicon: not in enabled drivers build config
00:03:04.861 dma/idxd: not in enabled drivers build config
00:03:04.861 dma/ioat: not in enabled drivers build config
00:03:04.861 dma/skeleton: not in enabled drivers build config
00:03:04.861 net/af_packet: not in enabled drivers build config
00:03:04.861 net/af_xdp: not in enabled drivers build config
00:03:04.861 net/ark: not in enabled drivers build config
00:03:04.861 net/atlantic: not in enabled drivers build config
00:03:04.861 net/avp: not in enabled drivers build config
00:03:04.861 net/axgbe: not in enabled drivers build config
00:03:04.861 net/bnx2x: not in enabled drivers build config
00:03:04.861 net/bnxt: not in enabled drivers build config
00:03:04.861 net/bonding: not in enabled drivers build config
00:03:04.861 net/cnxk: not in enabled drivers build config
00:03:04.861 net/cpfl: not in enabled drivers build config
00:03:04.861 net/cxgbe: not in enabled drivers build config
00:03:04.861 net/dpaa: not in enabled drivers build config
00:03:04.861 net/dpaa2: not in enabled drivers build config
00:03:04.861 net/e1000: not in enabled drivers build config
00:03:04.861 net/ena: not in enabled drivers build config
00:03:04.861 net/enetc: not in enabled drivers build config
00:03:04.861 net/enetfec: not in enabled drivers build config
00:03:04.861 net/enic: not in enabled drivers build config
00:03:04.861 net/failsafe: not in enabled drivers build config
00:03:04.861 net/fm10k: not in enabled drivers build config
00:03:04.861 net/gve: not in enabled drivers build config
00:03:04.861 net/hinic: not in enabled drivers build config
00:03:04.861 net/hns3: not in enabled drivers build config
00:03:04.861 net/i40e: not in enabled drivers build config
00:03:04.861 net/iavf: not in enabled drivers build config
00:03:04.861 net/ice: not in enabled drivers build config
00:03:04.861 net/idpf: not in enabled drivers build config
00:03:04.861 net/igc: not in enabled drivers build config
00:03:04.861 net/ionic: not in enabled drivers build config
00:03:04.861 net/ipn3ke: not in enabled drivers build config
00:03:04.861 net/ixgbe: not in enabled drivers build config
00:03:04.861 net/mana: not in enabled drivers build config
00:03:04.861 net/memif: not in enabled drivers build config
00:03:04.861 net/mlx4: not in enabled drivers build config
00:03:04.861 net/mlx5: not in enabled drivers build config
00:03:04.861 net/mvneta: not in enabled drivers build config
00:03:04.861 net/mvpp2: not in enabled drivers build config
00:03:04.861 net/netvsc: not in enabled drivers build config
00:03:04.861 net/nfb: not in enabled drivers build config
00:03:04.861 net/nfp: not in enabled drivers build config
00:03:04.861 net/ngbe: not in enabled drivers build config
00:03:04.861 net/null: not in enabled drivers build config
00:03:04.861 net/octeontx: not in enabled drivers build config
00:03:04.861 net/octeon_ep: not in enabled drivers build config
00:03:04.861 net/pcap: not in enabled drivers build config
00:03:04.861 net/pfe: not in enabled drivers build config
00:03:04.861 net/qede: not in enabled drivers build config
00:03:04.861 net/ring: not in enabled drivers build config
00:03:04.861 net/sfc: not in enabled drivers build config
00:03:04.861 net/softnic: not in enabled drivers build config
00:03:04.861 net/tap: not in enabled drivers build config
00:03:04.861 net/thunderx: not in enabled drivers build config
00:03:04.861 net/txgbe: not in enabled drivers build config
00:03:04.861 net/vdev_netvsc: not in enabled drivers build config
00:03:04.861 net/vhost: not in enabled drivers build config
00:03:04.861 net/virtio: not in enabled drivers build config
00:03:04.861 net/vmxnet3: not in enabled drivers build config
00:03:04.861 raw/*: missing internal dependency, "rawdev"
00:03:04.861 crypto/armv8: not in enabled drivers build config
00:03:04.861 crypto/bcmfs: not in enabled drivers build config
00:03:04.861 crypto/caam_jr: not in enabled drivers build config
00:03:04.861 crypto/ccp: not in enabled drivers build config
00:03:04.861 crypto/cnxk: not in enabled drivers build config
00:03:04.861 crypto/dpaa_sec: not in enabled drivers build config
00:03:04.861 crypto/dpaa2_sec: not in enabled drivers build config
00:03:04.861 crypto/ipsec_mb: not in enabled drivers build config
00:03:04.861 crypto/mlx5: not in enabled drivers build config
00:03:04.861 crypto/mvsam: not in enabled drivers build config
00:03:04.861 crypto/nitrox: not in enabled drivers build config
00:03:04.861 crypto/null: not in enabled drivers build config
00:03:04.861 crypto/octeontx: not in enabled drivers build config
00:03:04.861 crypto/openssl: not in enabled drivers build config
00:03:04.861 crypto/scheduler: not in enabled drivers build config
00:03:04.861 crypto/uadk: not in enabled drivers build config
00:03:04.861 crypto/virtio: not in enabled drivers build config
00:03:04.861 compress/isal: not in enabled drivers build config
00:03:04.861 compress/mlx5: not in enabled drivers build config
00:03:04.861 compress/nitrox: not in enabled drivers build config
00:03:04.861 compress/octeontx: not in enabled drivers build config
00:03:04.861 compress/zlib: not in enabled drivers build config
00:03:04.861 regex/*: missing internal dependency, "regexdev"
00:03:04.861 ml/*: missing internal dependency, "mldev"
00:03:04.861 vdpa/ifc: not in enabled drivers build config
00:03:04.861 vdpa/mlx5: not in enabled drivers build config
00:03:04.861 vdpa/nfp: not in enabled drivers build config
00:03:04.861 vdpa/sfc: not in enabled drivers build config
00:03:04.861 event/*: missing internal dependency, "eventdev"
00:03:04.861 baseband/*: missing internal dependency, "bbdev"
00:03:04.861 gpu/*: missing internal dependency, "gpudev"
00:03:04.861
00:03:04.861
00:03:04.861 Build targets in project: 85
00:03:04.861
00:03:04.861 DPDK 24.03.0
00:03:04.861
00:03:04.861 User defined options
00:03:04.861 buildtype : debug
00:03:04.861 default_library : shared
00:03:04.861 libdir : lib
00:03:04.861 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:04.861 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:04.861 c_link_args :
00:03:04.861 cpu_instruction_set: native
00:03:04.861 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:04.861 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:04.861 enable_docs : false
00:03:04.861 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:04.861 enable_kmods : false
00:03:04.861 max_lcores : 128
00:03:04.861 tests : false
00:03:04.861
00:03:04.861 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:04.861 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:04.861 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:04.861 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:04.861 [3/268] Linking static target lib/librte_kvargs.a
00:03:04.861 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:04.861 [5/268] Linking static target lib/librte_log.a
00:03:04.861 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:04.861 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:04.861 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.861 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:04.861 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:04.861 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:04.861 [12/268] Linking static target lib/librte_telemetry.a
00:03:04.861 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:04.861 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:04.861 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:04.861 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:04.861 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:04.861 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:04.861 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.861 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:04.861 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:04.861 [22/268] Linking target lib/librte_log.so.24.1
00:03:05.120 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:05.379 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:05.379 [25/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:05.379 [26/268] Linking target lib/librte_kvargs.so.24.1
00:03:05.379 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.638 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:05.638 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:05.638 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:05.638 [31/268] Linking target lib/librte_telemetry.so.24.1
00:03:05.638 [32/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:05.638 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:05.897 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:05.897 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:05.897 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:05.897 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:06.230 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:06.230 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:06.230 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:06.489 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:06.489 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:06.489 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:06.748 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:06.748 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:06.748 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:07.007 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:07.007 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:07.007 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:07.266 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:07.266 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:07.526 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:07.526 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:07.526 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:07.526 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:07.785 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:07.785 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:08.044 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:08.044 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:08.044 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:08.044 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:08.044 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:08.302 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:08.302 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:08.302 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:08.302 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:08.561 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:08.561 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:08.561 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:08.819 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:09.078 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:09.078 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:09.078 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:09.078 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:09.078 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:09.078 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:09.078 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:09.078 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:09.078 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:09.078 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:09.078 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:09.336 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.595 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:09.595 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:09.595 [85/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:09.595 [86/268] Linking static target lib/librte_rcu.a 00:03:09.595 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:09.595 [88/268] Linking static target lib/librte_ring.a 00:03:09.595 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:09.854 [90/268] Linking static target lib/librte_eal.a 00:03:09.854 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:09.854 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:10.119 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:10.119 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:10.119 [95/268] Linking static target lib/librte_mempool.a 00:03:10.119 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:10.119 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:10.388 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.388 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:10.388 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.388 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:10.388 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:10.388 [103/268] Linking static target lib/librte_mbuf.a 00:03:10.651 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:10.651 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:10.651 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:10.651 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:10.910 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:10.910 [109/268] Linking static 
target lib/librte_net.a 00:03:11.168 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:11.168 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:11.168 [112/268] Linking static target lib/librte_meter.a 00:03:11.427 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:11.427 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:11.686 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.686 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.686 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:11.944 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.944 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.944 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:12.202 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:12.460 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:12.460 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:12.719 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:12.719 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:12.719 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:12.978 [127/268] Linking static target lib/librte_pci.a 00:03:12.978 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:12.978 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:12.978 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:12.978 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:12.978 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:12.978 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:12.978 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:12.978 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:12.978 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:13.235 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:13.236 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:13.236 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:13.236 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:13.236 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:13.236 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:13.236 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:13.236 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:13.236 [145/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.495 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:13.495 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:13.495 [148/268] Linking static target lib/librte_cmdline.a 
00:03:13.754 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:13.754 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:14.012 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:14.012 [152/268] Linking static target lib/librte_timer.a 00:03:14.012 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:14.271 [154/268] Linking static target lib/librte_ethdev.a 00:03:14.271 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:14.271 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:14.271 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:14.529 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:14.529 [159/268] Linking static target lib/librte_hash.a 00:03:14.529 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:14.831 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.831 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:14.831 [163/268] Linking static target lib/librte_compressdev.a 00:03:14.831 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:15.090 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:15.349 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:15.349 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:15.349 [168/268] Linking static target lib/librte_dmadev.a 00:03:15.349 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:15.609 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:15.609 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:15.609 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:15.868 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.127 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.127 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:16.127 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:16.127 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:16.127 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.385 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:16.385 [180/268] Linking static target lib/librte_cryptodev.a 00:03:16.385 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:16.642 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:16.642 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.642 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:16.901 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:16.901 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:16.901 [187/268] Linking static target lib/librte_reorder.a 00:03:16.901 [188/268] Linking static target lib/librte_power.a 00:03:17.467 
[189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:17.467 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:17.467 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:17.467 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:17.467 [193/268] Linking static target lib/librte_security.a 00:03:17.725 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.292 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:18.292 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:18.551 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:18.551 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.551 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.551 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:18.810 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:19.069 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:19.350 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:19.351 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:19.609 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:19.609 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:19.883 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.883 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:19.883 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:19.883 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:19.883 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:20.453 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:20.453 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:20.453 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.453 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.453 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.453 [217/268] Linking static target drivers/librte_bus_pci.a 00:03:20.453 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.453 [219/268] Linking static target drivers/librte_bus_vdev.a 00:03:20.453 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:20.453 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:21.021 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:21.021 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.021 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.021 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:21.021 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:21.021 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:21.021 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:21.280 [229/268] Linking static target lib/librte_vhost.a 00:03:21.280 [230/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.211 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.187 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.187 [233/268] Linking target lib/librte_eal.so.24.1 00:03:24.445 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:24.445 [235/268] Linking target lib/librte_meter.so.24.1 00:03:24.445 [236/268] Linking target lib/librte_timer.so.24.1 00:03:24.445 [237/268] Linking target lib/librte_ring.so.24.1 00:03:24.445 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:24.445 [239/268] Linking target lib/librte_pci.so.24.1 00:03:24.445 [240/268] Linking target lib/librte_dmadev.so.24.1 00:03:24.704 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:24.704 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:24.704 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:24.704 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:24.704 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:24.704 [246/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.704 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:24.704 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:24.704 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:24.962 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:24.962 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:24.962 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:24.962 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:25.221 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:25.221 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:25.221 [256/268] Linking target lib/librte_net.so.24.1 00:03:25.221 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:25.221 [258/268] Linking target lib/librte_reorder.so.24.1 00:03:25.481 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:25.481 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:25.481 [261/268] Linking target lib/librte_hash.so.24.1 00:03:25.481 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:25.481 [263/268] Linking target lib/librte_security.so.24.1 00:03:25.481 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:25.740 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:25.740 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:25.740 [267/268] Linking target lib/librte_power.so.24.1 00:03:25.740 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:25.740 INFO: autodetecting backend as ninja 00:03:25.740 INFO: calculating backend command to run: 
/usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:52.291 CC lib/log/log.o 00:03:52.291 CC lib/log/log_flags.o 00:03:52.291 CC lib/log/log_deprecated.o 00:03:52.291 CC lib/ut/ut.o 00:03:52.291 CC lib/ut_mock/mock.o 00:03:52.291 LIB libspdk_log.a 00:03:52.291 LIB libspdk_ut.a 00:03:52.291 LIB libspdk_ut_mock.a 00:03:52.291 SO libspdk_ut.so.2.0 00:03:52.291 SO libspdk_log.so.7.1 00:03:52.291 SO libspdk_ut_mock.so.6.0 00:03:52.291 SYMLINK libspdk_ut.so 00:03:52.291 SYMLINK libspdk_log.so 00:03:52.291 SYMLINK libspdk_ut_mock.so 00:03:52.607 CC lib/ioat/ioat.o 00:03:52.607 CC lib/dma/dma.o 00:03:52.607 CC lib/util/base64.o 00:03:52.607 CC lib/util/bit_array.o 00:03:52.607 CC lib/util/crc16.o 00:03:52.607 CC lib/util/crc32.o 00:03:52.607 CC lib/util/cpuset.o 00:03:52.607 CC lib/util/crc32c.o 00:03:52.607 CXX lib/trace_parser/trace.o 00:03:52.877 CC lib/vfio_user/host/vfio_user_pci.o 00:03:52.877 CC lib/vfio_user/host/vfio_user.o 00:03:52.877 CC lib/util/crc32_ieee.o 00:03:52.877 CC lib/util/crc64.o 00:03:52.877 CC lib/util/dif.o 00:03:52.877 CC lib/util/fd.o 00:03:52.877 LIB libspdk_dma.a 00:03:52.877 CC lib/util/fd_group.o 00:03:52.877 SO libspdk_dma.so.5.0 00:03:52.877 LIB libspdk_ioat.a 00:03:52.877 CC lib/util/file.o 00:03:53.136 CC lib/util/hexlify.o 00:03:53.136 SO libspdk_ioat.so.7.0 00:03:53.136 SYMLINK libspdk_dma.so 00:03:53.136 CC lib/util/iov.o 00:03:53.136 SYMLINK libspdk_ioat.so 00:03:53.136 CC lib/util/math.o 00:03:53.136 CC lib/util/net.o 00:03:53.136 CC lib/util/pipe.o 00:03:53.136 LIB libspdk_vfio_user.a 00:03:53.136 SO libspdk_vfio_user.so.5.0 00:03:53.136 CC lib/util/strerror_tls.o 00:03:53.136 CC lib/util/string.o 00:03:53.136 SYMLINK libspdk_vfio_user.so 00:03:53.136 CC lib/util/uuid.o 00:03:53.136 CC lib/util/xor.o 00:03:53.136 CC lib/util/zipf.o 00:03:53.396 CC lib/util/md5.o 00:03:53.655 LIB libspdk_util.a 00:03:53.655 SO libspdk_util.so.10.1 00:03:53.655 LIB libspdk_trace_parser.a 00:03:53.655 SO libspdk_trace_parser.so.6.0 00:03:53.915 SYMLINK libspdk_trace_parser.so 00:03:53.915 SYMLINK libspdk_util.so 00:03:54.174 CC lib/env_dpdk/env.o 00:03:54.174 CC lib/rdma_utils/rdma_utils.o 00:03:54.174 CC lib/env_dpdk/init.o 00:03:54.174 CC lib/env_dpdk/threads.o 00:03:54.174 CC lib/env_dpdk/memory.o 00:03:54.174 CC lib/vmd/vmd.o 00:03:54.174 CC lib/env_dpdk/pci.o 00:03:54.174 CC lib/json/json_parse.o 00:03:54.174 CC lib/conf/conf.o 00:03:54.174 CC lib/idxd/idxd.o 00:03:54.174 CC lib/env_dpdk/pci_ioat.o 00:03:54.432 CC lib/json/json_util.o 00:03:54.432 LIB libspdk_conf.a 00:03:54.432 CC lib/json/json_write.o 00:03:54.432 SO libspdk_conf.so.6.0 00:03:54.432 LIB libspdk_rdma_utils.a 00:03:54.432 SO libspdk_rdma_utils.so.1.0 00:03:54.432 CC lib/vmd/led.o 00:03:54.432 SYMLINK libspdk_conf.so 00:03:54.432 CC lib/idxd/idxd_user.o 00:03:54.432 CC lib/idxd/idxd_kernel.o 00:03:54.432 SYMLINK libspdk_rdma_utils.so 00:03:54.432 CC lib/env_dpdk/pci_virtio.o 00:03:54.432 CC lib/env_dpdk/pci_vmd.o 00:03:54.689 CC lib/env_dpdk/pci_idxd.o 00:03:54.689 LIB libspdk_vmd.a 00:03:54.689 CC lib/env_dpdk/pci_event.o 00:03:54.689 CC lib/env_dpdk/sigbus_handler.o 00:03:54.689 LIB libspdk_json.a 00:03:54.689 SO libspdk_vmd.so.6.0 00:03:54.690 SO libspdk_json.so.6.0 00:03:54.690 CC lib/env_dpdk/pci_dpdk.o 00:03:54.690 SYMLINK libspdk_vmd.so 00:03:54.690 LIB libspdk_idxd.a 00:03:54.690 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:54.690 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:54.948 SO libspdk_idxd.so.12.1 00:03:54.948 SYMLINK libspdk_json.so 00:03:54.948 CC lib/rdma_provider/common.o 
00:03:54.948 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:54.948 SYMLINK libspdk_idxd.so 00:03:55.206 CC lib/jsonrpc/jsonrpc_server.o 00:03:55.206 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:55.206 CC lib/jsonrpc/jsonrpc_client.o 00:03:55.206 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:55.206 LIB libspdk_rdma_provider.a 00:03:55.206 SO libspdk_rdma_provider.so.7.0 00:03:55.206 SYMLINK libspdk_rdma_provider.so 00:03:55.463 LIB libspdk_jsonrpc.a 00:03:55.463 SO libspdk_jsonrpc.so.6.0 00:03:55.463 LIB libspdk_env_dpdk.a 00:03:55.721 SYMLINK libspdk_jsonrpc.so 00:03:55.721 SO libspdk_env_dpdk.so.15.1 00:03:55.983 SYMLINK libspdk_env_dpdk.so 00:03:55.983 CC lib/rpc/rpc.o 00:03:56.243 LIB libspdk_rpc.a 00:03:56.243 SO libspdk_rpc.so.6.0 00:03:56.502 SYMLINK libspdk_rpc.so 00:03:56.759 CC lib/notify/notify.o 00:03:56.759 CC lib/notify/notify_rpc.o 00:03:56.759 CC lib/trace/trace.o 00:03:56.759 CC lib/trace/trace_flags.o 00:03:56.759 CC lib/trace/trace_rpc.o 00:03:56.759 CC lib/keyring/keyring.o 00:03:56.759 CC lib/keyring/keyring_rpc.o 00:03:57.016 LIB libspdk_notify.a 00:03:57.016 SO libspdk_notify.so.6.0 00:03:57.016 LIB libspdk_keyring.a 00:03:57.016 LIB libspdk_trace.a 00:03:57.016 SO libspdk_keyring.so.2.0 00:03:57.016 SYMLINK libspdk_notify.so 00:03:57.016 SO libspdk_trace.so.11.0 00:03:57.016 SYMLINK libspdk_keyring.so 00:03:57.282 SYMLINK libspdk_trace.so 00:03:57.542 CC lib/thread/thread.o 00:03:57.542 CC lib/thread/iobuf.o 00:03:57.542 CC lib/sock/sock.o 00:03:57.542 CC lib/sock/sock_rpc.o 00:03:58.109 LIB libspdk_sock.a 00:03:58.109 SO libspdk_sock.so.10.0 00:03:58.109 SYMLINK libspdk_sock.so 00:03:58.675 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:58.675 CC lib/nvme/nvme_ns_cmd.o 00:03:58.675 CC lib/nvme/nvme_ctrlr.o 00:03:58.675 CC lib/nvme/nvme_ns.o 00:03:58.675 CC lib/nvme/nvme_fabric.o 00:03:58.675 CC lib/nvme/nvme_pcie_common.o 00:03:58.675 CC lib/nvme/nvme_pcie.o 00:03:58.675 CC lib/nvme/nvme_qpair.o 00:03:58.675 CC lib/nvme/nvme.o 00:03:59.241 LIB libspdk_thread.a 00:03:59.241 SO libspdk_thread.so.11.0 00:03:59.241 CC lib/nvme/nvme_quirks.o 00:03:59.241 CC lib/nvme/nvme_transport.o 00:03:59.241 CC lib/nvme/nvme_discovery.o 00:03:59.241 SYMLINK libspdk_thread.so 00:03:59.241 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:59.498 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:59.498 CC lib/nvme/nvme_tcp.o 00:03:59.498 CC lib/accel/accel.o 00:03:59.498 CC lib/nvme/nvme_opal.o 00:03:59.755 CC lib/accel/accel_rpc.o 00:04:00.012 CC lib/accel/accel_sw.o 00:04:00.012 CC lib/nvme/nvme_io_msg.o 00:04:00.012 CC lib/nvme/nvme_poll_group.o 00:04:00.012 CC lib/nvme/nvme_zns.o 00:04:00.012 CC lib/nvme/nvme_stubs.o 00:04:00.012 CC lib/blob/blobstore.o 00:04:00.271 CC lib/nvme/nvme_auth.o 00:04:00.836 CC lib/init/json_config.o 00:04:00.836 CC lib/init/subsystem.o 00:04:00.836 CC lib/init/subsystem_rpc.o 00:04:00.836 CC lib/blob/request.o 00:04:00.836 LIB libspdk_accel.a 00:04:00.836 SO libspdk_accel.so.16.0 00:04:00.836 CC lib/virtio/virtio.o 00:04:00.836 CC lib/blob/zeroes.o 00:04:00.836 CC lib/blob/blob_bs_dev.o 00:04:00.836 CC lib/init/rpc.o 00:04:00.836 SYMLINK libspdk_accel.so 00:04:00.836 CC lib/virtio/virtio_vhost_user.o 00:04:00.836 CC lib/virtio/virtio_vfio_user.o 00:04:01.095 CC lib/nvme/nvme_cuse.o 00:04:01.095 LIB libspdk_init.a 00:04:01.095 CC lib/nvme/nvme_rdma.o 00:04:01.095 SO libspdk_init.so.6.0 00:04:01.095 CC lib/virtio/virtio_pci.o 00:04:01.354 SYMLINK libspdk_init.so 00:04:01.354 CC lib/fsdev/fsdev.o 00:04:01.354 CC lib/fsdev/fsdev_io.o 00:04:01.354 CC lib/fsdev/fsdev_rpc.o 00:04:01.354 CC 
lib/bdev/bdev.o 00:04:01.354 CC lib/bdev/bdev_rpc.o 00:04:01.354 CC lib/bdev/bdev_zone.o 00:04:01.354 CC lib/event/app.o 00:04:01.612 LIB libspdk_virtio.a 00:04:01.612 CC lib/bdev/part.o 00:04:01.612 CC lib/event/reactor.o 00:04:01.612 SO libspdk_virtio.so.7.0 00:04:01.612 CC lib/event/log_rpc.o 00:04:01.612 SYMLINK libspdk_virtio.so 00:04:01.612 CC lib/event/app_rpc.o 00:04:01.871 CC lib/event/scheduler_static.o 00:04:01.871 CC lib/bdev/scsi_nvme.o 00:04:01.871 LIB libspdk_fsdev.a 00:04:01.871 SO libspdk_fsdev.so.2.0 00:04:02.159 SYMLINK libspdk_fsdev.so 00:04:02.159 LIB libspdk_event.a 00:04:02.159 SO libspdk_event.so.14.0 00:04:02.159 SYMLINK libspdk_event.so 00:04:02.417 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:02.417 LIB libspdk_nvme.a 00:04:02.676 SO libspdk_nvme.so.15.0 00:04:02.933 LIB libspdk_fuse_dispatcher.a 00:04:02.933 SYMLINK libspdk_nvme.so 00:04:03.191 SO libspdk_fuse_dispatcher.so.1.0 00:04:03.191 LIB libspdk_blob.a 00:04:03.191 SYMLINK libspdk_fuse_dispatcher.so 00:04:03.191 SO libspdk_blob.so.11.0 00:04:03.449 SYMLINK libspdk_blob.so 00:04:03.708 CC lib/blobfs/blobfs.o 00:04:03.708 CC lib/blobfs/tree.o 00:04:03.708 CC lib/lvol/lvol.o 00:04:04.302 LIB libspdk_bdev.a 00:04:04.302 SO libspdk_bdev.so.17.0 00:04:04.607 SYMLINK libspdk_bdev.so 00:04:04.607 LIB libspdk_blobfs.a 00:04:04.607 SO libspdk_blobfs.so.10.0 00:04:04.607 CC lib/ftl/ftl_core.o 00:04:04.607 CC lib/ftl/ftl_layout.o 00:04:04.607 CC lib/ftl/ftl_init.o 00:04:04.607 CC lib/scsi/dev.o 00:04:04.607 CC lib/ftl/ftl_debug.o 00:04:04.607 CC lib/nbd/nbd.o 00:04:04.607 CC lib/ublk/ublk.o 00:04:04.607 CC lib/nvmf/ctrlr.o 00:04:04.607 SYMLINK libspdk_blobfs.so 00:04:04.607 CC lib/scsi/lun.o 00:04:04.864 LIB libspdk_lvol.a 00:04:04.864 SO libspdk_lvol.so.10.0 00:04:04.864 SYMLINK libspdk_lvol.so 00:04:04.864 CC lib/scsi/port.o 00:04:04.864 CC lib/scsi/scsi.o 00:04:04.864 CC lib/nbd/nbd_rpc.o 00:04:04.864 CC lib/scsi/scsi_bdev.o 00:04:05.121 CC lib/ftl/ftl_io.o 00:04:05.121 CC lib/scsi/scsi_pr.o 00:04:05.121 CC lib/ublk/ublk_rpc.o 00:04:05.121 CC lib/ftl/ftl_sb.o 00:04:05.121 CC lib/ftl/ftl_l2p.o 00:04:05.121 LIB libspdk_nbd.a 00:04:05.121 CC lib/ftl/ftl_l2p_flat.o 00:04:05.121 SO libspdk_nbd.so.7.0 00:04:05.379 CC lib/scsi/scsi_rpc.o 00:04:05.379 SYMLINK libspdk_nbd.so 00:04:05.379 CC lib/scsi/task.o 00:04:05.379 CC lib/ftl/ftl_nv_cache.o 00:04:05.379 CC lib/ftl/ftl_band.o 00:04:05.379 LIB libspdk_ublk.a 00:04:05.379 CC lib/nvmf/ctrlr_discovery.o 00:04:05.379 CC lib/nvmf/ctrlr_bdev.o 00:04:05.379 CC lib/nvmf/subsystem.o 00:04:05.379 SO libspdk_ublk.so.3.0 00:04:05.379 CC lib/ftl/ftl_band_ops.o 00:04:05.379 SYMLINK libspdk_ublk.so 00:04:05.379 CC lib/ftl/ftl_writer.o 00:04:05.638 LIB libspdk_scsi.a 00:04:05.638 CC lib/nvmf/nvmf.o 00:04:05.638 SO libspdk_scsi.so.9.0 00:04:05.638 SYMLINK libspdk_scsi.so 00:04:05.638 CC lib/nvmf/nvmf_rpc.o 00:04:05.638 CC lib/nvmf/transport.o 00:04:05.897 CC lib/ftl/ftl_rq.o 00:04:05.897 CC lib/nvmf/tcp.o 00:04:05.897 CC lib/nvmf/stubs.o 00:04:06.155 CC lib/iscsi/conn.o 00:04:06.155 CC lib/vhost/vhost.o 00:04:06.414 CC lib/ftl/ftl_reloc.o 00:04:06.414 CC lib/vhost/vhost_rpc.o 00:04:06.414 CC lib/nvmf/mdns_server.o 00:04:06.414 CC lib/nvmf/rdma.o 00:04:06.414 CC lib/nvmf/auth.o 00:04:06.673 CC lib/iscsi/init_grp.o 00:04:06.673 CC lib/ftl/ftl_l2p_cache.o 00:04:06.673 CC lib/vhost/vhost_scsi.o 00:04:06.932 CC lib/iscsi/iscsi.o 00:04:06.932 CC lib/vhost/vhost_blk.o 00:04:06.932 CC lib/vhost/rte_vhost_user.o 00:04:06.932 CC lib/iscsi/param.o 00:04:07.191 CC lib/ftl/ftl_p2l.o 00:04:07.191 CC 
lib/ftl/ftl_p2l_log.o 00:04:07.449 CC lib/iscsi/portal_grp.o 00:04:07.449 CC lib/iscsi/tgt_node.o 00:04:07.449 CC lib/ftl/mngt/ftl_mngt.o 00:04:07.449 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:07.708 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:07.708 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:07.708 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:07.708 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:07.708 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:07.708 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:07.967 CC lib/iscsi/iscsi_subsystem.o 00:04:07.967 CC lib/iscsi/iscsi_rpc.o 00:04:07.967 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:07.967 LIB libspdk_vhost.a 00:04:07.967 SO libspdk_vhost.so.8.0 00:04:07.967 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:07.967 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:07.967 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:07.967 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:08.226 SYMLINK libspdk_vhost.so 00:04:08.226 CC lib/ftl/utils/ftl_conf.o 00:04:08.226 CC lib/iscsi/task.o 00:04:08.226 CC lib/ftl/utils/ftl_md.o 00:04:08.226 CC lib/ftl/utils/ftl_mempool.o 00:04:08.226 CC lib/ftl/utils/ftl_bitmap.o 00:04:08.226 CC lib/ftl/utils/ftl_property.o 00:04:08.226 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:08.226 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:08.226 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:08.487 LIB libspdk_nvmf.a 00:04:08.487 LIB libspdk_iscsi.a 00:04:08.487 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:08.487 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:08.487 SO libspdk_iscsi.so.8.0 00:04:08.487 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:08.487 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:08.487 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:08.487 SO libspdk_nvmf.so.20.0 00:04:08.487 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:08.487 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:08.746 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:08.746 SYMLINK libspdk_iscsi.so 00:04:08.746 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:08.746 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:08.746 CC lib/ftl/base/ftl_base_dev.o 00:04:08.746 CC lib/ftl/base/ftl_base_bdev.o 00:04:08.746 CC lib/ftl/ftl_trace.o 00:04:08.746 SYMLINK libspdk_nvmf.so 00:04:09.005 LIB libspdk_ftl.a 00:04:09.265 SO libspdk_ftl.so.9.0 00:04:09.524 SYMLINK libspdk_ftl.so 00:04:10.094 CC module/env_dpdk/env_dpdk_rpc.o 00:04:10.094 CC module/accel/ioat/accel_ioat.o 00:04:10.094 CC module/fsdev/aio/fsdev_aio.o 00:04:10.094 CC module/sock/posix/posix.o 00:04:10.094 CC module/blob/bdev/blob_bdev.o 00:04:10.094 CC module/keyring/linux/keyring.o 00:04:10.094 CC module/keyring/file/keyring.o 00:04:10.094 CC module/accel/dsa/accel_dsa.o 00:04:10.094 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:10.094 CC module/accel/error/accel_error.o 00:04:10.353 LIB libspdk_env_dpdk_rpc.a 00:04:10.353 SO libspdk_env_dpdk_rpc.so.6.0 00:04:10.353 CC module/keyring/linux/keyring_rpc.o 00:04:10.353 CC module/keyring/file/keyring_rpc.o 00:04:10.353 SYMLINK libspdk_env_dpdk_rpc.so 00:04:10.353 CC module/accel/error/accel_error_rpc.o 00:04:10.353 CC module/accel/ioat/accel_ioat_rpc.o 00:04:10.353 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:10.353 LIB libspdk_scheduler_dynamic.a 00:04:10.353 LIB libspdk_blob_bdev.a 00:04:10.353 SO libspdk_scheduler_dynamic.so.4.0 00:04:10.612 SO libspdk_blob_bdev.so.11.0 00:04:10.612 LIB libspdk_keyring_linux.a 00:04:10.612 CC module/accel/dsa/accel_dsa_rpc.o 00:04:10.612 LIB libspdk_keyring_file.a 00:04:10.612 LIB libspdk_accel_error.a 00:04:10.612 SO libspdk_keyring_linux.so.1.0 00:04:10.612 SYMLINK libspdk_scheduler_dynamic.so 00:04:10.612 LIB libspdk_accel_ioat.a 00:04:10.612 
SO libspdk_accel_error.so.2.0 00:04:10.612 SO libspdk_keyring_file.so.2.0 00:04:10.612 SYMLINK libspdk_blob_bdev.so 00:04:10.612 CC module/fsdev/aio/linux_aio_mgr.o 00:04:10.612 SO libspdk_accel_ioat.so.6.0 00:04:10.612 SYMLINK libspdk_keyring_linux.so 00:04:10.612 SYMLINK libspdk_keyring_file.so 00:04:10.612 SYMLINK libspdk_accel_error.so 00:04:10.612 SYMLINK libspdk_accel_ioat.so 00:04:10.612 LIB libspdk_accel_dsa.a 00:04:10.612 SO libspdk_accel_dsa.so.5.0 00:04:10.872 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:10.872 CC module/scheduler/gscheduler/gscheduler.o 00:04:10.872 SYMLINK libspdk_accel_dsa.so 00:04:10.872 LIB libspdk_fsdev_aio.a 00:04:10.872 CC module/accel/iaa/accel_iaa.o 00:04:10.872 SO libspdk_fsdev_aio.so.1.0 00:04:10.872 LIB libspdk_scheduler_dpdk_governor.a 00:04:10.872 CC module/bdev/delay/vbdev_delay.o 00:04:10.872 CC module/bdev/error/vbdev_error.o 00:04:10.872 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:10.872 CC module/blobfs/bdev/blobfs_bdev.o 00:04:10.872 SYMLINK libspdk_fsdev_aio.so 00:04:10.872 CC module/bdev/error/vbdev_error_rpc.o 00:04:10.872 LIB libspdk_sock_posix.a 00:04:10.872 LIB libspdk_scheduler_gscheduler.a 00:04:11.132 CC module/bdev/gpt/gpt.o 00:04:11.132 CC module/bdev/lvol/vbdev_lvol.o 00:04:11.132 SO libspdk_scheduler_gscheduler.so.4.0 00:04:11.132 SO libspdk_sock_posix.so.6.0 00:04:11.132 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:11.132 CC module/bdev/gpt/vbdev_gpt.o 00:04:11.132 CC module/accel/iaa/accel_iaa_rpc.o 00:04:11.132 SYMLINK libspdk_scheduler_gscheduler.so 00:04:11.132 SYMLINK libspdk_sock_posix.so 00:04:11.132 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:11.132 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:11.132 LIB libspdk_accel_iaa.a 00:04:11.132 LIB libspdk_bdev_error.a 00:04:11.391 SO libspdk_accel_iaa.so.3.0 00:04:11.391 CC module/bdev/malloc/bdev_malloc.o 00:04:11.391 SO libspdk_bdev_error.so.6.0 00:04:11.391 CC module/bdev/null/bdev_null.o 00:04:11.391 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:11.391 LIB libspdk_bdev_gpt.a 00:04:11.391 SYMLINK libspdk_accel_iaa.so 00:04:11.391 CC module/bdev/nvme/bdev_nvme.o 00:04:11.391 LIB libspdk_blobfs_bdev.a 00:04:11.391 SO libspdk_bdev_gpt.so.6.0 00:04:11.391 SYMLINK libspdk_bdev_error.so 00:04:11.391 SO libspdk_blobfs_bdev.so.6.0 00:04:11.391 SYMLINK libspdk_bdev_gpt.so 00:04:11.391 SYMLINK libspdk_blobfs_bdev.so 00:04:11.652 LIB libspdk_bdev_delay.a 00:04:11.652 LIB libspdk_bdev_lvol.a 00:04:11.652 CC module/bdev/passthru/vbdev_passthru.o 00:04:11.652 SO libspdk_bdev_delay.so.6.0 00:04:11.652 CC module/bdev/raid/bdev_raid.o 00:04:11.652 SO libspdk_bdev_lvol.so.6.0 00:04:11.652 CC module/bdev/null/bdev_null_rpc.o 00:04:11.652 SYMLINK libspdk_bdev_delay.so 00:04:11.652 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:11.652 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:11.652 SYMLINK libspdk_bdev_lvol.so 00:04:11.652 CC module/bdev/split/vbdev_split.o 00:04:11.652 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:11.652 CC module/bdev/aio/bdev_aio.o 00:04:11.912 LIB libspdk_bdev_null.a 00:04:11.912 SO libspdk_bdev_null.so.6.0 00:04:11.912 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:11.912 CC module/bdev/ftl/bdev_ftl.o 00:04:11.912 LIB libspdk_bdev_malloc.a 00:04:11.912 SO libspdk_bdev_malloc.so.6.0 00:04:11.912 CC module/bdev/split/vbdev_split_rpc.o 00:04:11.912 SYMLINK libspdk_bdev_null.so 00:04:11.912 SYMLINK libspdk_bdev_malloc.so 00:04:12.171 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:12.171 LIB libspdk_bdev_passthru.a 00:04:12.171 CC 
module/bdev/aio/bdev_aio_rpc.o 00:04:12.171 SO libspdk_bdev_passthru.so.6.0 00:04:12.171 LIB libspdk_bdev_split.a 00:04:12.171 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:12.171 SO libspdk_bdev_split.so.6.0 00:04:12.171 SYMLINK libspdk_bdev_passthru.so 00:04:12.171 CC module/bdev/iscsi/bdev_iscsi.o 00:04:12.171 CC module/bdev/nvme/nvme_rpc.o 00:04:12.171 LIB libspdk_bdev_zone_block.a 00:04:12.171 LIB libspdk_bdev_aio.a 00:04:12.171 SYMLINK libspdk_bdev_split.so 00:04:12.431 SO libspdk_bdev_zone_block.so.6.0 00:04:12.431 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:12.431 CC module/bdev/nvme/bdev_mdns_client.o 00:04:12.431 SO libspdk_bdev_aio.so.6.0 00:04:12.431 SYMLINK libspdk_bdev_zone_block.so 00:04:12.431 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:12.431 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:12.431 SYMLINK libspdk_bdev_aio.so 00:04:12.431 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:12.431 LIB libspdk_bdev_ftl.a 00:04:12.431 SO libspdk_bdev_ftl.so.6.0 00:04:12.431 CC module/bdev/nvme/vbdev_opal.o 00:04:12.431 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:12.431 SYMLINK libspdk_bdev_ftl.so 00:04:12.431 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:12.691 CC module/bdev/raid/bdev_raid_rpc.o 00:04:12.691 CC module/bdev/raid/bdev_raid_sb.o 00:04:12.691 LIB libspdk_bdev_iscsi.a 00:04:12.691 CC module/bdev/raid/raid0.o 00:04:12.691 SO libspdk_bdev_iscsi.so.6.0 00:04:12.691 CC module/bdev/raid/raid1.o 00:04:12.691 CC module/bdev/raid/concat.o 00:04:12.691 SYMLINK libspdk_bdev_iscsi.so 00:04:12.949 LIB libspdk_bdev_virtio.a 00:04:12.949 SO libspdk_bdev_virtio.so.6.0 00:04:12.949 LIB libspdk_bdev_raid.a 00:04:12.949 SYMLINK libspdk_bdev_virtio.so 00:04:13.209 SO libspdk_bdev_raid.so.6.0 00:04:13.209 SYMLINK libspdk_bdev_raid.so 00:04:13.779 LIB libspdk_bdev_nvme.a 00:04:14.038 SO libspdk_bdev_nvme.so.7.1 00:04:14.038 SYMLINK libspdk_bdev_nvme.so 00:04:14.975 CC module/event/subsystems/scheduler/scheduler.o 00:04:14.975 CC module/event/subsystems/keyring/keyring.o 00:04:14.975 CC module/event/subsystems/sock/sock.o 00:04:14.975 CC module/event/subsystems/fsdev/fsdev.o 00:04:14.975 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:14.975 CC module/event/subsystems/vmd/vmd.o 00:04:14.975 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:14.975 CC module/event/subsystems/iobuf/iobuf.o 00:04:14.975 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:14.975 LIB libspdk_event_keyring.a 00:04:14.975 LIB libspdk_event_scheduler.a 00:04:14.975 LIB libspdk_event_vhost_blk.a 00:04:14.975 SO libspdk_event_keyring.so.1.0 00:04:14.975 LIB libspdk_event_fsdev.a 00:04:14.975 LIB libspdk_event_sock.a 00:04:14.975 LIB libspdk_event_vmd.a 00:04:14.975 SO libspdk_event_scheduler.so.4.0 00:04:14.975 SO libspdk_event_vhost_blk.so.3.0 00:04:14.975 SO libspdk_event_sock.so.5.0 00:04:14.975 SO libspdk_event_fsdev.so.1.0 00:04:14.975 LIB libspdk_event_iobuf.a 00:04:14.975 SO libspdk_event_vmd.so.6.0 00:04:14.975 SYMLINK libspdk_event_keyring.so 00:04:14.975 SYMLINK libspdk_event_scheduler.so 00:04:14.975 SO libspdk_event_iobuf.so.3.0 00:04:14.975 SYMLINK libspdk_event_vhost_blk.so 00:04:14.975 SYMLINK libspdk_event_sock.so 00:04:14.975 SYMLINK libspdk_event_fsdev.so 00:04:14.975 SYMLINK libspdk_event_vmd.so 00:04:14.975 SYMLINK libspdk_event_iobuf.so 00:04:15.540 CC module/event/subsystems/accel/accel.o 00:04:15.541 LIB libspdk_event_accel.a 00:04:15.799 SO libspdk_event_accel.so.6.0 00:04:15.799 SYMLINK libspdk_event_accel.so 00:04:16.093 CC module/event/subsystems/bdev/bdev.o 00:04:16.355 LIB libspdk_event_bdev.a 
00:04:16.355 SO libspdk_event_bdev.so.6.0 00:04:16.615 SYMLINK libspdk_event_bdev.so 00:04:16.873 CC module/event/subsystems/nbd/nbd.o 00:04:16.873 CC module/event/subsystems/scsi/scsi.o 00:04:16.873 CC module/event/subsystems/ublk/ublk.o 00:04:16.873 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:16.873 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:17.132 LIB libspdk_event_nbd.a 00:04:17.132 LIB libspdk_event_ublk.a 00:04:17.132 LIB libspdk_event_scsi.a 00:04:17.132 SO libspdk_event_scsi.so.6.0 00:04:17.132 SO libspdk_event_nbd.so.6.0 00:04:17.132 SO libspdk_event_ublk.so.3.0 00:04:17.132 SYMLINK libspdk_event_scsi.so 00:04:17.132 SYMLINK libspdk_event_ublk.so 00:04:17.132 SYMLINK libspdk_event_nbd.so 00:04:17.132 LIB libspdk_event_nvmf.a 00:04:17.132 SO libspdk_event_nvmf.so.6.0 00:04:17.391 SYMLINK libspdk_event_nvmf.so 00:04:17.391 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:17.391 CC module/event/subsystems/iscsi/iscsi.o 00:04:17.650 LIB libspdk_event_vhost_scsi.a 00:04:17.650 SO libspdk_event_vhost_scsi.so.3.0 00:04:17.650 LIB libspdk_event_iscsi.a 00:04:17.650 SO libspdk_event_iscsi.so.6.0 00:04:17.650 SYMLINK libspdk_event_vhost_scsi.so 00:04:17.909 SYMLINK libspdk_event_iscsi.so 00:04:18.168 SO libspdk.so.6.0 00:04:18.168 SYMLINK libspdk.so 00:04:18.426 CC app/spdk_lspci/spdk_lspci.o 00:04:18.426 CC app/trace_record/trace_record.o 00:04:18.426 CXX app/trace/trace.o 00:04:18.426 CC app/nvmf_tgt/nvmf_main.o 00:04:18.426 CC app/iscsi_tgt/iscsi_tgt.o 00:04:18.426 CC app/spdk_tgt/spdk_tgt.o 00:04:18.426 CC test/thread/poller_perf/poller_perf.o 00:04:18.426 CC examples/util/zipf/zipf.o 00:04:18.426 CC test/dma/test_dma/test_dma.o 00:04:18.426 CC test/app/bdev_svc/bdev_svc.o 00:04:18.426 LINK spdk_lspci 00:04:18.685 LINK poller_perf 00:04:18.685 LINK spdk_trace_record 00:04:18.685 LINK zipf 00:04:18.685 LINK nvmf_tgt 00:04:18.685 LINK iscsi_tgt 00:04:18.685 LINK spdk_tgt 00:04:18.685 LINK bdev_svc 00:04:18.685 LINK spdk_trace 00:04:18.944 CC app/spdk_nvme_perf/perf.o 00:04:18.944 CC test/app/histogram_perf/histogram_perf.o 00:04:18.944 CC examples/ioat/perf/perf.o 00:04:18.944 LINK test_dma 00:04:18.944 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:18.944 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:18.944 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:19.203 LINK histogram_perf 00:04:19.203 CC test/app/jsoncat/jsoncat.o 00:04:19.203 CC test/app/stub/stub.o 00:04:19.203 LINK ioat_perf 00:04:19.203 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:19.203 LINK jsoncat 00:04:19.462 LINK stub 00:04:19.462 TEST_HEADER include/spdk/accel.h 00:04:19.462 TEST_HEADER include/spdk/accel_module.h 00:04:19.462 TEST_HEADER include/spdk/assert.h 00:04:19.462 TEST_HEADER include/spdk/barrier.h 00:04:19.462 TEST_HEADER include/spdk/base64.h 00:04:19.462 TEST_HEADER include/spdk/bdev.h 00:04:19.462 TEST_HEADER include/spdk/bdev_module.h 00:04:19.462 TEST_HEADER include/spdk/bdev_zone.h 00:04:19.462 TEST_HEADER include/spdk/bit_array.h 00:04:19.462 TEST_HEADER include/spdk/bit_pool.h 00:04:19.462 TEST_HEADER include/spdk/blob_bdev.h 00:04:19.462 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:19.462 TEST_HEADER include/spdk/blobfs.h 00:04:19.462 TEST_HEADER include/spdk/blob.h 00:04:19.462 TEST_HEADER include/spdk/conf.h 00:04:19.462 TEST_HEADER include/spdk/config.h 00:04:19.462 TEST_HEADER include/spdk/cpuset.h 00:04:19.462 TEST_HEADER include/spdk/crc16.h 00:04:19.462 TEST_HEADER include/spdk/crc32.h 00:04:19.462 TEST_HEADER include/spdk/crc64.h 00:04:19.462 TEST_HEADER 
include/spdk/dif.h 00:04:19.462 LINK nvme_fuzz 00:04:19.462 TEST_HEADER include/spdk/dma.h 00:04:19.462 TEST_HEADER include/spdk/endian.h 00:04:19.462 TEST_HEADER include/spdk/env_dpdk.h 00:04:19.462 CC examples/ioat/verify/verify.o 00:04:19.462 TEST_HEADER include/spdk/env.h 00:04:19.462 TEST_HEADER include/spdk/event.h 00:04:19.462 TEST_HEADER include/spdk/fd_group.h 00:04:19.462 TEST_HEADER include/spdk/fd.h 00:04:19.462 TEST_HEADER include/spdk/file.h 00:04:19.462 TEST_HEADER include/spdk/fsdev.h 00:04:19.462 TEST_HEADER include/spdk/fsdev_module.h 00:04:19.462 TEST_HEADER include/spdk/ftl.h 00:04:19.462 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:19.463 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:19.463 TEST_HEADER include/spdk/gpt_spec.h 00:04:19.463 TEST_HEADER include/spdk/hexlify.h 00:04:19.463 TEST_HEADER include/spdk/histogram_data.h 00:04:19.463 TEST_HEADER include/spdk/idxd.h 00:04:19.463 TEST_HEADER include/spdk/idxd_spec.h 00:04:19.463 TEST_HEADER include/spdk/init.h 00:04:19.463 TEST_HEADER include/spdk/ioat.h 00:04:19.463 TEST_HEADER include/spdk/ioat_spec.h 00:04:19.463 TEST_HEADER include/spdk/iscsi_spec.h 00:04:19.463 TEST_HEADER include/spdk/json.h 00:04:19.463 TEST_HEADER include/spdk/jsonrpc.h 00:04:19.463 TEST_HEADER include/spdk/keyring.h 00:04:19.463 TEST_HEADER include/spdk/keyring_module.h 00:04:19.463 TEST_HEADER include/spdk/likely.h 00:04:19.463 TEST_HEADER include/spdk/log.h 00:04:19.463 TEST_HEADER include/spdk/lvol.h 00:04:19.463 TEST_HEADER include/spdk/md5.h 00:04:19.463 TEST_HEADER include/spdk/memory.h 00:04:19.463 TEST_HEADER include/spdk/mmio.h 00:04:19.463 TEST_HEADER include/spdk/nbd.h 00:04:19.463 TEST_HEADER include/spdk/net.h 00:04:19.463 TEST_HEADER include/spdk/notify.h 00:04:19.463 TEST_HEADER include/spdk/nvme.h 00:04:19.463 TEST_HEADER include/spdk/nvme_intel.h 00:04:19.463 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:19.463 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:19.463 TEST_HEADER include/spdk/nvme_spec.h 00:04:19.463 TEST_HEADER include/spdk/nvme_zns.h 00:04:19.463 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:19.722 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:19.722 TEST_HEADER include/spdk/nvmf.h 00:04:19.722 TEST_HEADER include/spdk/nvmf_spec.h 00:04:19.722 TEST_HEADER include/spdk/nvmf_transport.h 00:04:19.722 TEST_HEADER include/spdk/opal.h 00:04:19.722 TEST_HEADER include/spdk/opal_spec.h 00:04:19.722 TEST_HEADER include/spdk/pci_ids.h 00:04:19.722 TEST_HEADER include/spdk/pipe.h 00:04:19.722 TEST_HEADER include/spdk/queue.h 00:04:19.722 TEST_HEADER include/spdk/reduce.h 00:04:19.722 TEST_HEADER include/spdk/rpc.h 00:04:19.722 TEST_HEADER include/spdk/scheduler.h 00:04:19.722 TEST_HEADER include/spdk/scsi.h 00:04:19.722 TEST_HEADER include/spdk/scsi_spec.h 00:04:19.722 TEST_HEADER include/spdk/sock.h 00:04:19.722 TEST_HEADER include/spdk/stdinc.h 00:04:19.722 TEST_HEADER include/spdk/string.h 00:04:19.722 TEST_HEADER include/spdk/thread.h 00:04:19.722 TEST_HEADER include/spdk/trace.h 00:04:19.722 TEST_HEADER include/spdk/trace_parser.h 00:04:19.722 TEST_HEADER include/spdk/tree.h 00:04:19.722 TEST_HEADER include/spdk/ublk.h 00:04:19.722 TEST_HEADER include/spdk/util.h 00:04:19.722 TEST_HEADER include/spdk/uuid.h 00:04:19.722 TEST_HEADER include/spdk/version.h 00:04:19.722 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:19.722 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:19.722 TEST_HEADER include/spdk/vhost.h 00:04:19.722 TEST_HEADER include/spdk/vmd.h 00:04:19.722 TEST_HEADER include/spdk/xor.h 00:04:19.722 
TEST_HEADER include/spdk/zipf.h 00:04:19.722 CXX test/cpp_headers/accel.o 00:04:19.722 LINK vhost_fuzz 00:04:19.722 LINK interrupt_tgt 00:04:19.722 CC examples/thread/thread/thread_ex.o 00:04:19.722 LINK verify 00:04:19.722 LINK spdk_nvme_perf 00:04:19.722 CC examples/sock/hello_world/hello_sock.o 00:04:19.722 CXX test/cpp_headers/accel_module.o 00:04:19.980 CC examples/vmd/lsvmd/lsvmd.o 00:04:19.980 CXX test/cpp_headers/assert.o 00:04:19.980 CC examples/vmd/led/led.o 00:04:19.980 LINK thread 00:04:19.980 LINK lsvmd 00:04:19.980 CC app/spdk_nvme_identify/identify.o 00:04:19.980 LINK hello_sock 00:04:20.238 CC app/spdk_nvme_discover/discovery_aer.o 00:04:20.238 CC examples/idxd/perf/perf.o 00:04:20.238 CXX test/cpp_headers/barrier.o 00:04:20.238 LINK led 00:04:20.497 CXX test/cpp_headers/base64.o 00:04:20.497 CXX test/cpp_headers/bdev.o 00:04:20.497 LINK spdk_nvme_discover 00:04:20.497 CC app/spdk_top/spdk_top.o 00:04:20.497 CC examples/accel/perf/accel_perf.o 00:04:20.497 LINK idxd_perf 00:04:20.497 CXX test/cpp_headers/bdev_module.o 00:04:20.756 CC examples/blob/hello_world/hello_blob.o 00:04:20.756 CC examples/blob/cli/blobcli.o 00:04:20.756 LINK iscsi_fuzz 00:04:20.756 CC examples/nvme/hello_world/hello_world.o 00:04:20.756 CXX test/cpp_headers/bdev_zone.o 00:04:21.015 LINK hello_blob 00:04:21.015 LINK spdk_nvme_identify 00:04:21.015 LINK hello_world 00:04:21.015 CXX test/cpp_headers/bit_array.o 00:04:21.015 LINK accel_perf 00:04:21.015 CC test/env/vtophys/vtophys.o 00:04:21.273 CC test/env/mem_callbacks/mem_callbacks.o 00:04:21.273 CXX test/cpp_headers/bit_pool.o 00:04:21.273 LINK blobcli 00:04:21.273 LINK vtophys 00:04:21.273 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:21.273 CC examples/nvme/reconnect/reconnect.o 00:04:21.273 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:21.531 CC test/env/memory/memory_ut.o 00:04:21.531 CXX test/cpp_headers/blob_bdev.o 00:04:21.531 LINK spdk_top 00:04:21.531 CXX test/cpp_headers/blobfs_bdev.o 00:04:21.531 LINK env_dpdk_post_init 00:04:21.788 LINK reconnect 00:04:21.788 CXX test/cpp_headers/blobfs.o 00:04:21.788 LINK mem_callbacks 00:04:21.788 CC examples/nvme/arbitration/arbitration.o 00:04:21.788 CC app/vhost/vhost.o 00:04:21.788 LINK nvme_manage 00:04:22.047 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:22.047 CXX test/cpp_headers/blob.o 00:04:22.047 CC test/env/pci/pci_ut.o 00:04:22.047 LINK vhost 00:04:22.047 CC examples/nvme/hotplug/hotplug.o 00:04:22.306 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:22.306 CXX test/cpp_headers/conf.o 00:04:22.306 CC examples/nvme/abort/abort.o 00:04:22.306 LINK hello_fsdev 00:04:22.306 LINK arbitration 00:04:22.306 LINK cmb_copy 00:04:22.306 LINK hotplug 00:04:22.565 CXX test/cpp_headers/config.o 00:04:22.565 CXX test/cpp_headers/cpuset.o 00:04:22.565 CC app/spdk_dd/spdk_dd.o 00:04:22.565 LINK pci_ut 00:04:22.565 CXX test/cpp_headers/crc16.o 00:04:22.565 CXX test/cpp_headers/crc32.o 00:04:22.565 CXX test/cpp_headers/crc64.o 00:04:22.565 CXX test/cpp_headers/dif.o 00:04:22.565 LINK abort 00:04:22.565 LINK memory_ut 00:04:22.824 CXX test/cpp_headers/dma.o 00:04:22.824 CXX test/cpp_headers/endian.o 00:04:22.824 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:22.824 CXX test/cpp_headers/env_dpdk.o 00:04:22.824 CXX test/cpp_headers/env.o 00:04:22.824 CXX test/cpp_headers/event.o 00:04:22.824 CXX test/cpp_headers/fd_group.o 00:04:22.824 LINK spdk_dd 00:04:23.085 LINK pmr_persistence 00:04:23.085 CXX test/cpp_headers/fd.o 00:04:23.085 CXX test/cpp_headers/file.o 00:04:23.346 CC 
app/fio/nvme/fio_plugin.o 00:04:23.346 CC test/rpc_client/rpc_client_test.o 00:04:23.346 CC examples/bdev/hello_world/hello_bdev.o 00:04:23.346 CC test/event/event_perf/event_perf.o 00:04:23.346 CXX test/cpp_headers/fsdev.o 00:04:23.346 CC test/nvme/aer/aer.o 00:04:23.346 CC test/event/reactor_perf/reactor_perf.o 00:04:23.346 CC test/event/reactor/reactor.o 00:04:23.607 CC test/event/app_repeat/app_repeat.o 00:04:23.607 LINK rpc_client_test 00:04:23.607 LINK event_perf 00:04:23.607 LINK reactor 00:04:23.607 LINK hello_bdev 00:04:23.607 CXX test/cpp_headers/fsdev_module.o 00:04:23.607 LINK reactor_perf 00:04:23.866 LINK app_repeat 00:04:23.866 LINK aer 00:04:23.866 CXX test/cpp_headers/ftl.o 00:04:23.866 CXX test/cpp_headers/fuse_dispatcher.o 00:04:23.866 CC test/event/scheduler/scheduler.o 00:04:23.866 LINK spdk_nvme 00:04:24.124 CC app/fio/bdev/fio_plugin.o 00:04:24.124 CXX test/cpp_headers/gpt_spec.o 00:04:24.125 CC test/nvme/reset/reset.o 00:04:24.125 CC examples/bdev/bdevperf/bdevperf.o 00:04:24.125 CC test/nvme/sgl/sgl.o 00:04:24.125 CC test/accel/dif/dif.o 00:04:24.125 CC test/nvme/e2edp/nvme_dp.o 00:04:24.125 LINK scheduler 00:04:24.385 CC test/blobfs/mkfs/mkfs.o 00:04:24.385 CXX test/cpp_headers/hexlify.o 00:04:24.385 LINK sgl 00:04:24.385 LINK reset 00:04:24.385 LINK spdk_bdev 00:04:24.690 LINK mkfs 00:04:24.690 LINK nvme_dp 00:04:24.690 CXX test/cpp_headers/histogram_data.o 00:04:24.690 CC test/nvme/overhead/overhead.o 00:04:24.690 CXX test/cpp_headers/idxd.o 00:04:24.690 CC test/nvme/err_injection/err_injection.o 00:04:24.690 CC test/nvme/startup/startup.o 00:04:24.690 CC test/nvme/reserve/reserve.o 00:04:24.690 LINK dif 00:04:24.948 CC test/nvme/simple_copy/simple_copy.o 00:04:24.948 CXX test/cpp_headers/idxd_spec.o 00:04:24.948 LINK err_injection 00:04:24.948 LINK overhead 00:04:24.948 LINK startup 00:04:24.948 LINK bdevperf 00:04:24.948 LINK reserve 00:04:24.948 CC test/lvol/esnap/esnap.o 00:04:24.948 LINK simple_copy 00:04:24.948 CXX test/cpp_headers/init.o 00:04:25.208 CXX test/cpp_headers/ioat.o 00:04:25.208 CC test/nvme/connect_stress/connect_stress.o 00:04:25.208 CC test/nvme/boot_partition/boot_partition.o 00:04:25.208 CC test/bdev/bdevio/bdevio.o 00:04:25.208 CC test/nvme/compliance/nvme_compliance.o 00:04:25.208 CXX test/cpp_headers/ioat_spec.o 00:04:25.208 CC test/nvme/fused_ordering/fused_ordering.o 00:04:25.468 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:25.468 CC test/nvme/fdp/fdp.o 00:04:25.468 LINK connect_stress 00:04:25.468 LINK boot_partition 00:04:25.468 LINK doorbell_aers 00:04:25.728 CXX test/cpp_headers/iscsi_spec.o 00:04:25.728 LINK fused_ordering 00:04:25.728 CXX test/cpp_headers/json.o 00:04:25.728 LINK bdevio 00:04:25.728 LINK nvme_compliance 00:04:25.728 LINK fdp 00:04:25.728 CXX test/cpp_headers/jsonrpc.o 00:04:25.728 CC test/nvme/cuse/cuse.o 00:04:25.728 CXX test/cpp_headers/keyring.o 00:04:25.987 CXX test/cpp_headers/keyring_module.o 00:04:25.987 CXX test/cpp_headers/likely.o 00:04:25.987 CXX test/cpp_headers/log.o 00:04:25.987 CXX test/cpp_headers/lvol.o 00:04:25.987 CXX test/cpp_headers/md5.o 00:04:25.987 CXX test/cpp_headers/memory.o 00:04:25.987 CXX test/cpp_headers/mmio.o 00:04:25.987 CC examples/nvmf/nvmf/nvmf.o 00:04:25.987 CXX test/cpp_headers/nbd.o 00:04:25.987 CXX test/cpp_headers/net.o 00:04:25.987 CXX test/cpp_headers/notify.o 00:04:26.246 CXX test/cpp_headers/nvme.o 00:04:26.246 CXX test/cpp_headers/nvme_intel.o 00:04:26.246 CXX test/cpp_headers/nvme_ocssd.o 00:04:26.246 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:26.246 CXX 
test/cpp_headers/nvme_spec.o 00:04:26.246 CXX test/cpp_headers/nvme_zns.o 00:04:26.246 CXX test/cpp_headers/nvmf_cmd.o 00:04:26.246 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:26.246 CXX test/cpp_headers/nvmf.o 00:04:26.246 CXX test/cpp_headers/nvmf_spec.o 00:04:26.505 LINK nvmf 00:04:26.505 CXX test/cpp_headers/nvmf_transport.o 00:04:26.505 CXX test/cpp_headers/opal.o 00:04:26.505 CXX test/cpp_headers/opal_spec.o 00:04:26.505 CXX test/cpp_headers/pci_ids.o 00:04:26.505 CXX test/cpp_headers/pipe.o 00:04:26.505 CXX test/cpp_headers/queue.o 00:04:26.505 CXX test/cpp_headers/reduce.o 00:04:26.505 CXX test/cpp_headers/rpc.o 00:04:26.505 CXX test/cpp_headers/scheduler.o 00:04:26.505 CXX test/cpp_headers/scsi.o 00:04:26.505 CXX test/cpp_headers/scsi_spec.o 00:04:26.505 CXX test/cpp_headers/sock.o 00:04:26.505 CXX test/cpp_headers/stdinc.o 00:04:26.764 CXX test/cpp_headers/string.o 00:04:26.764 CXX test/cpp_headers/thread.o 00:04:26.764 CXX test/cpp_headers/trace.o 00:04:26.764 CXX test/cpp_headers/trace_parser.o 00:04:26.764 CXX test/cpp_headers/tree.o 00:04:26.764 CXX test/cpp_headers/ublk.o 00:04:26.764 CXX test/cpp_headers/util.o 00:04:26.764 CXX test/cpp_headers/uuid.o 00:04:26.764 CXX test/cpp_headers/version.o 00:04:26.764 CXX test/cpp_headers/vfio_user_pci.o 00:04:26.764 CXX test/cpp_headers/vfio_user_spec.o 00:04:26.764 CXX test/cpp_headers/vhost.o 00:04:26.764 CXX test/cpp_headers/vmd.o 00:04:27.023 CXX test/cpp_headers/xor.o 00:04:27.023 CXX test/cpp_headers/zipf.o 00:04:27.023 LINK cuse 00:04:29.562 LINK esnap 00:04:30.129 00:04:30.129 real 1m43.899s 00:04:30.129 user 9m14.540s 00:04:30.129 sys 2m16.451s 00:04:30.129 15:12:16 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:30.129 15:12:16 make -- common/autotest_common.sh@10 -- $ set +x 00:04:30.130 ************************************ 00:04:30.130 END TEST make 00:04:30.130 ************************************ 00:04:30.130 15:12:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:30.130 15:12:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:30.130 15:12:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:30.130 15:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.130 15:12:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:30.130 15:12:16 -- pm/common@44 -- $ pid=5256 00:04:30.130 15:12:16 -- pm/common@50 -- $ kill -TERM 5256 00:04:30.130 15:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.130 15:12:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:30.130 15:12:16 -- pm/common@44 -- $ pid=5258 00:04:30.130 15:12:16 -- pm/common@50 -- $ kill -TERM 5258 00:04:30.130 15:12:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:30.130 15:12:16 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:30.130 15:12:16 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.130 15:12:16 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.130 15:12:16 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.389 15:12:16 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.389 15:12:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.389 15:12:16 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.389 15:12:16 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.389 15:12:16 -- 
scripts/common.sh@336 -- # IFS=.-: 00:04:30.389 15:12:16 -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.389 15:12:16 -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.389 15:12:16 -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.389 15:12:16 -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.389 15:12:16 -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.389 15:12:16 -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.389 15:12:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.389 15:12:16 -- scripts/common.sh@344 -- # case "$op" in 00:04:30.389 15:12:16 -- scripts/common.sh@345 -- # : 1 00:04:30.389 15:12:16 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.389 15:12:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.389 15:12:16 -- scripts/common.sh@365 -- # decimal 1 00:04:30.389 15:12:16 -- scripts/common.sh@353 -- # local d=1 00:04:30.389 15:12:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.389 15:12:16 -- scripts/common.sh@355 -- # echo 1 00:04:30.389 15:12:16 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.389 15:12:16 -- scripts/common.sh@366 -- # decimal 2 00:04:30.389 15:12:16 -- scripts/common.sh@353 -- # local d=2 00:04:30.389 15:12:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.389 15:12:16 -- scripts/common.sh@355 -- # echo 2 00:04:30.389 15:12:16 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.389 15:12:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.389 15:12:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.389 15:12:16 -- scripts/common.sh@368 -- # return 0 00:04:30.389 15:12:16 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.389 15:12:16 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.389 --rc genhtml_branch_coverage=1 00:04:30.389 --rc genhtml_function_coverage=1 00:04:30.389 --rc genhtml_legend=1 00:04:30.389 --rc geninfo_all_blocks=1 00:04:30.389 --rc geninfo_unexecuted_blocks=1 00:04:30.389 00:04:30.389 ' 00:04:30.389 15:12:16 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.389 --rc genhtml_branch_coverage=1 00:04:30.389 --rc genhtml_function_coverage=1 00:04:30.389 --rc genhtml_legend=1 00:04:30.389 --rc geninfo_all_blocks=1 00:04:30.389 --rc geninfo_unexecuted_blocks=1 00:04:30.389 00:04:30.389 ' 00:04:30.389 15:12:16 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.389 --rc genhtml_branch_coverage=1 00:04:30.389 --rc genhtml_function_coverage=1 00:04:30.389 --rc genhtml_legend=1 00:04:30.389 --rc geninfo_all_blocks=1 00:04:30.389 --rc geninfo_unexecuted_blocks=1 00:04:30.389 00:04:30.389 ' 00:04:30.389 15:12:16 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.389 --rc genhtml_branch_coverage=1 00:04:30.389 --rc genhtml_function_coverage=1 00:04:30.389 --rc genhtml_legend=1 00:04:30.389 --rc geninfo_all_blocks=1 00:04:30.389 --rc geninfo_unexecuted_blocks=1 00:04:30.389 00:04:30.389 ' 00:04:30.389 15:12:16 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:30.389 15:12:16 -- nvmf/common.sh@7 -- # uname -s 00:04:30.389 15:12:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.389 15:12:16 -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.389 15:12:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.389 15:12:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.389 15:12:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.390 15:12:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.390 15:12:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.390 15:12:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.390 15:12:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.390 15:12:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.390 15:12:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:04:30.390 15:12:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:04:30.390 15:12:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.390 15:12:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.390 15:12:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:30.390 15:12:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.390 15:12:16 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:30.390 15:12:16 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.390 15:12:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.390 15:12:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.390 15:12:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.390 15:12:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.390 15:12:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.390 15:12:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.390 15:12:16 -- paths/export.sh@5 -- # export PATH 00:04:30.390 15:12:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.390 15:12:16 -- nvmf/common.sh@51 -- # : 0 00:04:30.390 15:12:16 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.390 15:12:16 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:30.390 15:12:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.390 15:12:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.390 15:12:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.390 15:12:16 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.390 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.390 15:12:16 
-- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.390 15:12:16 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.390 15:12:16 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.390 15:12:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:30.390 15:12:16 -- spdk/autotest.sh@32 -- # uname -s 00:04:30.390 15:12:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:30.390 15:12:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:30.390 15:12:16 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:30.390 15:12:16 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:30.390 15:12:16 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:30.390 15:12:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:30.390 15:12:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:30.390 15:12:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:30.390 15:12:16 -- spdk/autotest.sh@48 -- # udevadm_pid=56264 00:04:30.390 15:12:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:30.390 15:12:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:30.390 15:12:16 -- pm/common@17 -- # local monitor 00:04:30.390 15:12:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.390 15:12:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.390 15:12:16 -- pm/common@25 -- # sleep 1 00:04:30.390 15:12:16 -- pm/common@21 -- # date +%s 00:04:30.390 15:12:16 -- pm/common@21 -- # date +%s 00:04:30.390 15:12:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732115536 00:04:30.390 15:12:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732115536 00:04:30.390 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732115536_collect-cpu-load.pm.log 00:04:30.390 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732115536_collect-vmstat.pm.log 00:04:31.411 15:12:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:31.411 15:12:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:31.411 15:12:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.411 15:12:17 -- common/autotest_common.sh@10 -- # set +x 00:04:31.411 15:12:17 -- spdk/autotest.sh@59 -- # create_test_list 00:04:31.411 15:12:17 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:31.411 15:12:17 -- common/autotest_common.sh@10 -- # set +x 00:04:31.411 15:12:17 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:31.411 15:12:17 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:31.411 15:12:17 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:31.411 15:12:17 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:31.411 15:12:17 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:31.411 15:12:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:31.411 15:12:17 -- common/autotest_common.sh@1457 -- # uname 00:04:31.411 15:12:17 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:31.411 15:12:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 
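[Editor's sketch] The autotest.sh entries above (script lines 32-40) save the distro's systemd core handler and point the kernel at SPDK's collector before any test runs. A minimal bash sketch of that sequence, reusing the src/out paths traced at lines 61-62; the redirection targets are assumptions, since xtrace records each command's arguments but not where its output goes:

src=/home/vagrant/spdk_repo/spdk
out=$src/../output
old_core_pattern=$(</proc/sys/kernel/core_pattern)   # the systemd-coredump pipeline logged at line 33
mkdir -p "$out/coredumps"                            # line 34: destination for collected cores
# line 39: pipe kernel core dumps through SPDK's collector (assumed target: core_pattern; needs root)
echo "|$src/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
# line 40: record the coredump directory for the collector (this file name is hypothetical)
echo "$out/coredumps" > "$src/.coredump_path"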
00:04:31.411 15:12:17 -- common/autotest_common.sh@1477 -- # uname 00:04:31.411 15:12:17 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:31.411 15:12:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:31.411 15:12:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:31.671 lcov: LCOV version 1.15 00:04:31.671 15:12:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:46.553 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:46.553 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:04.638 15:12:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:04.638 15:12:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.638 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:05:04.638 15:12:49 -- spdk/autotest.sh@78 -- # rm -f 00:05:04.638 15:12:49 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.638 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:04.638 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:04.638 15:12:50 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:04.638 15:12:50 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:04.638 15:12:50 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:04.638 15:12:50 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:04.638 15:12:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:04.638 15:12:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:04.638 15:12:50 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:04.638 15:12:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:04.638 15:12:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.638 15:12:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:04.638 15:12:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:04.638 15:12:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:04.638 15:12:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:04.638 15:12:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.638 15:12:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:04.638 15:12:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:04.638 15:12:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:04.638 15:12:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:04.638 15:12:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.638 15:12:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:04.638 15:12:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:04.638 15:12:50 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:04.638 15:12:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:04.638 15:12:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.638 15:12:50 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:04.638 15:12:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.638 15:12:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.638 15:12:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:04.638 15:12:50 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:04.638 15:12:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:04.638 No valid GPT data, bailing 00:05:04.638 15:12:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:04.638 15:12:50 -- scripts/common.sh@394 -- # pt= 00:05:04.639 15:12:50 -- scripts/common.sh@395 -- # return 1 00:05:04.639 15:12:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:04.639 1+0 records in 00:05:04.639 1+0 records out 00:05:04.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00623467 s, 168 MB/s 00:05:04.639 15:12:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.639 15:12:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.639 15:12:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:04.639 15:12:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:04.639 15:12:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:04.639 No valid GPT data, bailing 00:05:04.639 15:12:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:04.639 15:12:50 -- scripts/common.sh@394 -- # pt= 00:05:04.639 15:12:50 -- scripts/common.sh@395 -- # return 1 00:05:04.639 15:12:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:04.639 1+0 records in 00:05:04.639 1+0 records out 00:05:04.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00652197 s, 161 MB/s 00:05:04.639 15:12:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.639 15:12:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.639 15:12:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:04.639 15:12:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:04.639 15:12:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:04.639 No valid GPT data, bailing 00:05:04.639 15:12:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:04.639 15:12:50 -- scripts/common.sh@394 -- # pt= 00:05:04.639 15:12:50 -- scripts/common.sh@395 -- # return 1 00:05:04.639 15:12:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:04.639 1+0 records in 00:05:04.639 1+0 records out 00:05:04.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512396 s, 205 MB/s 00:05:04.639 15:12:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.639 15:12:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.639 15:12:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:04.639 15:12:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:04.639 15:12:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:04.639 No valid GPT data, bailing 00:05:04.639 15:12:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:04.639 15:12:50 -- 
scripts/common.sh@394 -- # pt=
00:05:04.639 15:12:50 -- scripts/common.sh@395 -- # return 1
00:05:04.639 15:12:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:05:04.639 1+0 records in
00:05:04.639 1+0 records out
00:05:04.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639245 s, 164 MB/s
00:05:04.639 15:12:50 -- spdk/autotest.sh@105 -- # sync
00:05:04.639 15:12:50 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:04.639 15:12:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:04.639 15:12:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:07.173 15:12:53 -- spdk/autotest.sh@111 -- # uname -s
00:05:07.173 15:12:53 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:05:07.173 15:12:53 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:05:07.173 15:12:53 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:08.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:08.112 Hugepages
00:05:08.112 node hugesize free / total
00:05:08.112 node0 1048576kB 0 / 0
00:05:08.112 node0 2048kB 0 / 0
00:05:08.112
00:05:08.112 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:08.112 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:08.112 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:08.371 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:05:08.371 15:12:54 -- spdk/autotest.sh@117 -- # uname -s
00:05:08.371 15:12:54 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:08.371 15:12:54 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:08.371 15:12:54 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:09.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:09.320 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:09.320 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:09.320 15:12:55 -- common/autotest_common.sh@1517 -- # sleep 1
00:05:10.257 15:12:56 -- common/autotest_common.sh@1518 -- # bdfs=()
00:05:10.257 15:12:56 -- common/autotest_common.sh@1518 -- # local bdfs
00:05:10.257 15:12:56 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:05:10.257 15:12:56 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:05:10.257 15:12:56 -- common/autotest_common.sh@1498 -- # bdfs=()
00:05:10.257 15:12:56 -- common/autotest_common.sh@1498 -- # local bdfs
00:05:10.257 15:12:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:10.257 15:12:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:10.257 15:12:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:05:10.515 15:12:56 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:05:10.515 15:12:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:05:10.515 15:12:56 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:11.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:11.163 Waiting for block devices as requested
00:05:11.163 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:05:11.163 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:05:11.163 15:12:57 --
common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:11.163 15:12:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:11.163 15:12:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:11.163 15:12:57 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:11.163 15:12:57 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:11.163 15:12:57 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:11.163 15:12:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:11.163 15:12:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:11.163 15:12:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:11.163 15:12:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:11.163 15:12:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:11.163 15:12:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:11.163 15:12:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:11.163 15:12:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:11.163 15:12:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:11.163 15:12:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:11.163 15:12:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:11.163 15:12:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:11.163 15:12:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:11.422 15:12:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:11.422 15:12:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:11.422 15:12:57 -- common/autotest_common.sh@1543 -- # continue 00:05:11.422 15:12:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:11.422 15:12:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:11.422 15:12:57 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:11.422 15:12:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:11.422 15:12:57 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:11.422 15:12:57 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:11.422 15:12:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:11.422 15:12:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:11.422 15:12:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:11.422 15:12:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:11.422 15:12:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:11.422 15:12:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:11.422 15:12:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:11.422 15:12:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:11.422 15:12:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:11.422 15:12:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:11.422 15:12:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:11.422 15:12:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:11.422 15:12:57 -- common/autotest_common.sh@1540 -- # cut 
-d: -f2 00:05:11.422 15:12:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:11.422 15:12:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:11.422 15:12:57 -- common/autotest_common.sh@1543 -- # continue 00:05:11.422 15:12:57 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:11.422 15:12:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.422 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:05:11.422 15:12:57 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:11.422 15:12:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.422 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:05:11.422 15:12:57 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.362 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.362 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.362 15:12:58 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:12.362 15:12:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.362 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:05:12.621 15:12:58 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:12.621 15:12:58 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:12.621 15:12:58 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:12.621 15:12:58 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:12.621 15:12:58 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:12.621 15:12:58 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:12.621 15:12:58 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:12.621 15:12:58 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:12.621 15:12:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:12.621 15:12:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:12.621 15:12:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.621 15:12:58 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:12.621 15:12:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:12.621 15:12:58 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:12.621 15:12:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:12.621 15:12:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:12.621 15:12:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:12.621 15:12:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:12.621 15:12:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:12.621 15:12:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:12.621 15:12:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:12.621 15:12:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:12.621 15:12:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:12.621 15:12:58 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:12.621 15:12:58 -- common/autotest_common.sh@1572 -- # return 0 00:05:12.621 15:12:58 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:12.621 15:12:58 -- common/autotest_common.sh@1580 -- # return 0 00:05:12.621 15:12:58 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 
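[Editor's sketch] Both the pre-cleanup pass earlier and the opal_revert_cleanup above lean on the same get_nvme_bdfs helper traced at common/autotest_common.sh lines 1498-1504: gen_nvme.sh emits a JSON bdev config and jq pulls each controller's PCI address out of it. A self-contained bash sketch of that helper as the xtrace shows it ($rootdir is the repo root, as in the trace); only the error message on the empty branch is an assumption, since this run finds two controllers and never takes it:

get_nvme_bdfs() {
    local bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # the trace's (( 2 == 0 )) guard: bail out when no controllers are found
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; return 1; }
    printf '%s\n' "${bdfs[@]}"   # on this box: 0000:00:10.0 and 0000:00:11.0
}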
00:05:12.621 15:12:58 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:12.621 15:12:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:12.621 15:12:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:12.621 15:12:58 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:12.621 15:12:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.621 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:05:12.621 15:12:59 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:12.621 15:12:59 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:12.621 15:12:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.621 15:12:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.621 15:12:59 -- common/autotest_common.sh@10 -- # set +x 00:05:12.621 ************************************ 00:05:12.621 START TEST env 00:05:12.621 ************************************ 00:05:12.621 15:12:59 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:12.881 * Looking for test storage... 00:05:12.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:12.881 15:12:59 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.881 15:12:59 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.881 15:12:59 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.881 15:12:59 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.881 15:12:59 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.881 15:12:59 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.881 15:12:59 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.881 15:12:59 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.881 15:12:59 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.881 15:12:59 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.881 15:12:59 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.881 15:12:59 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.881 15:12:59 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.881 15:12:59 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.881 15:12:59 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.881 15:12:59 env -- scripts/common.sh@344 -- # case "$op" in 00:05:12.881 15:12:59 env -- scripts/common.sh@345 -- # : 1 00:05:12.881 15:12:59 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.881 15:12:59 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.881 15:12:59 env -- scripts/common.sh@365 -- # decimal 1 00:05:12.881 15:12:59 env -- scripts/common.sh@353 -- # local d=1 00:05:12.881 15:12:59 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.881 15:12:59 env -- scripts/common.sh@355 -- # echo 1 00:05:12.881 15:12:59 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.882 15:12:59 env -- scripts/common.sh@366 -- # decimal 2 00:05:12.882 15:12:59 env -- scripts/common.sh@353 -- # local d=2 00:05:12.882 15:12:59 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.882 15:12:59 env -- scripts/common.sh@355 -- # echo 2 00:05:12.882 15:12:59 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.882 15:12:59 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.882 15:12:59 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.882 15:12:59 env -- scripts/common.sh@368 -- # return 0 00:05:12.882 15:12:59 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.882 15:12:59 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.882 --rc genhtml_branch_coverage=1 00:05:12.882 --rc genhtml_function_coverage=1 00:05:12.882 --rc genhtml_legend=1 00:05:12.882 --rc geninfo_all_blocks=1 00:05:12.882 --rc geninfo_unexecuted_blocks=1 00:05:12.882 00:05:12.882 ' 00:05:12.882 15:12:59 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.882 --rc genhtml_branch_coverage=1 00:05:12.882 --rc genhtml_function_coverage=1 00:05:12.882 --rc genhtml_legend=1 00:05:12.882 --rc geninfo_all_blocks=1 00:05:12.882 --rc geninfo_unexecuted_blocks=1 00:05:12.882 00:05:12.882 ' 00:05:12.882 15:12:59 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.882 --rc genhtml_branch_coverage=1 00:05:12.882 --rc genhtml_function_coverage=1 00:05:12.882 --rc genhtml_legend=1 00:05:12.882 --rc geninfo_all_blocks=1 00:05:12.882 --rc geninfo_unexecuted_blocks=1 00:05:12.882 00:05:12.882 ' 00:05:12.882 15:12:59 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.882 --rc genhtml_branch_coverage=1 00:05:12.882 --rc genhtml_function_coverage=1 00:05:12.882 --rc genhtml_legend=1 00:05:12.882 --rc geninfo_all_blocks=1 00:05:12.882 --rc geninfo_unexecuted_blocks=1 00:05:12.882 00:05:12.882 ' 00:05:12.882 15:12:59 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:12.882 15:12:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.882 15:12:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.882 15:12:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.882 ************************************ 00:05:12.882 START TEST env_memory 00:05:12.882 ************************************ 00:05:12.882 15:12:59 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:12.882 00:05:12.882 00:05:12.882 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.882 http://cunit.sourceforge.net/ 00:05:12.882 00:05:12.882 00:05:12.882 Suite: memory 00:05:12.882 Test: alloc and free memory map ...[2024-11-20 15:12:59.316443] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:12.882 passed 00:05:12.882 Test: mem map translation ...[2024-11-20 15:12:59.337291] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:12.882 [2024-11-20 15:12:59.337357] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:12.882 [2024-11-20 15:12:59.337398] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:12.882 [2024-11-20 15:12:59.337407] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:13.141 passed 00:05:13.141 Test: mem map registration ...[2024-11-20 15:12:59.375414] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:13.141 [2024-11-20 15:12:59.375453] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:13.141 passed 00:05:13.141 Test: mem map adjacent registrations ...passed 00:05:13.141 00:05:13.141 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.141 suites 1 1 n/a 0 0 00:05:13.141 tests 4 4 4 0 0 00:05:13.141 asserts 152 152 152 0 n/a 00:05:13.141 00:05:13.141 Elapsed time = 0.138 seconds 00:05:13.141 00:05:13.141 real 0m0.163s 00:05:13.141 user 0m0.141s 00:05:13.141 sys 0m0.018s 00:05:13.141 15:12:59 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.141 15:12:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:13.141 ************************************ 00:05:13.141 END TEST env_memory 00:05:13.141 ************************************ 00:05:13.141 15:12:59 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:13.141 15:12:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.141 15:12:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.141 15:12:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.141 ************************************ 00:05:13.141 START TEST env_vtophys 00:05:13.141 ************************************ 00:05:13.141 15:12:59 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:13.141 EAL: lib.eal log level changed from notice to debug 00:05:13.141 EAL: Detected lcore 0 as core 0 on socket 0 00:05:13.141 EAL: Detected lcore 1 as core 0 on socket 0 00:05:13.141 EAL: Detected lcore 2 as core 0 on socket 0 00:05:13.141 EAL: Detected lcore 3 as core 0 on socket 0 00:05:13.141 EAL: Detected lcore 4 as core 0 on socket 0 00:05:13.141 EAL: Detected lcore 5 as core 0 on socket 0 00:05:13.141 EAL: Detected lcore 6 as core 0 on socket 0 00:05:13.141 EAL: Detected lcore 7 as core 0 on socket 0 00:05:13.141 EAL: Detected lcore 8 as core 0 on socket 0 00:05:13.141 EAL: Detected lcore 9 as core 0 on socket 0 00:05:13.142 EAL: Maximum logical cores by configuration: 128 00:05:13.142 EAL: Detected CPU lcores: 10 00:05:13.142 EAL: Detected NUMA nodes: 1 00:05:13.142 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:13.142 EAL: Detected shared linkage of DPDK 00:05:13.142 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:13.142 EAL: Selected IOVA mode 'PA' 00:05:13.142 EAL: Probing VFIO support... 00:05:13.142 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:13.142 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:13.142 EAL: Ask a virtual area of 0x2e000 bytes 00:05:13.142 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:13.142 EAL: Setting up physically contiguous memory... 00:05:13.142 EAL: Setting maximum number of open files to 524288 00:05:13.142 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:13.142 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:13.142 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.142 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:13.142 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.142 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.142 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:13.142 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:13.142 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.142 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:13.142 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.142 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.142 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:13.142 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:13.142 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.142 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:13.142 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.142 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.142 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:13.142 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:13.142 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.142 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:13.142 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.142 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.142 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:13.142 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:13.142 EAL: Hugepages will be freed exactly as allocated. 00:05:13.142 EAL: No shared files mode enabled, IPC is disabled 00:05:13.142 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: TSC frequency is ~2490000 KHz 00:05:13.402 EAL: Main lcore 0 is ready (tid=7efd81120a00;cpuset=[0]) 00:05:13.402 EAL: Trying to obtain current memory policy. 00:05:13.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.402 EAL: Restoring previous memory policy: 0 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was expanded by 2MB 00:05:13.402 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:13.402 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:13.402 EAL: Mem event callback 'spdk:(nil)' registered 00:05:13.402 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:13.402 00:05:13.402 00:05:13.402 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.402 http://cunit.sourceforge.net/ 00:05:13.402 00:05:13.402 00:05:13.402 Suite: components_suite 00:05:13.402 Test: vtophys_malloc_test ...passed 00:05:13.402 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:13.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.402 EAL: Restoring previous memory policy: 4 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was expanded by 4MB 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was shrunk by 4MB 00:05:13.402 EAL: Trying to obtain current memory policy. 00:05:13.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.402 EAL: Restoring previous memory policy: 4 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was expanded by 6MB 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was shrunk by 6MB 00:05:13.402 EAL: Trying to obtain current memory policy. 00:05:13.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.402 EAL: Restoring previous memory policy: 4 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was expanded by 10MB 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was shrunk by 10MB 00:05:13.402 EAL: Trying to obtain current memory policy. 00:05:13.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.402 EAL: Restoring previous memory policy: 4 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was expanded by 18MB 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was shrunk by 18MB 00:05:13.402 EAL: Trying to obtain current memory policy. 00:05:13.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.402 EAL: Restoring previous memory policy: 4 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was expanded by 34MB 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was shrunk by 34MB 00:05:13.402 EAL: Trying to obtain current memory policy. 
00:05:13.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.402 EAL: Restoring previous memory policy: 4 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was expanded by 66MB 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was shrunk by 66MB 00:05:13.402 EAL: Trying to obtain current memory policy. 00:05:13.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.402 EAL: Restoring previous memory policy: 4 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was expanded by 130MB 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was shrunk by 130MB 00:05:13.402 EAL: Trying to obtain current memory policy. 00:05:13.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.402 EAL: Restoring previous memory policy: 4 00:05:13.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.402 EAL: request: mp_malloc_sync 00:05:13.402 EAL: No shared files mode enabled, IPC is disabled 00:05:13.402 EAL: Heap on socket 0 was expanded by 258MB 00:05:13.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.663 EAL: request: mp_malloc_sync 00:05:13.663 EAL: No shared files mode enabled, IPC is disabled 00:05:13.663 EAL: Heap on socket 0 was shrunk by 258MB 00:05:13.663 EAL: Trying to obtain current memory policy. 00:05:13.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.663 EAL: Restoring previous memory policy: 4 00:05:13.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.663 EAL: request: mp_malloc_sync 00:05:13.663 EAL: No shared files mode enabled, IPC is disabled 00:05:13.663 EAL: Heap on socket 0 was expanded by 514MB 00:05:13.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.921 EAL: request: mp_malloc_sync 00:05:13.921 EAL: No shared files mode enabled, IPC is disabled 00:05:13.921 EAL: Heap on socket 0 was shrunk by 514MB 00:05:13.921 EAL: Trying to obtain current memory policy. 
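[Editor's note] The expand/shrink pairs in this suite are not arbitrary: the logged sizes fit 2^k + 2 MB for k = 1..10, i.e. 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB so far, with 1026 MB still to come below, so each round roughly doubles the heap the EAL allocator must grow from hugepages and then hand back.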
00:05:13.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.921 EAL: Restoring previous memory policy: 4 00:05:13.921 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.921 EAL: request: mp_malloc_sync 00:05:13.921 EAL: No shared files mode enabled, IPC is disabled 00:05:13.921 EAL: Heap on socket 0 was expanded by 1026MB 00:05:14.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.439 passed 00:05:14.439 00:05:14.439 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.439 suites 1 1 n/a 0 0 00:05:14.439 tests 2 2 2 0 0 00:05:14.439 asserts 5456 5456 5456 0 n/a 00:05:14.439 00:05:14.439 Elapsed time = 0.980 seconds 00:05:14.439 EAL: request: mp_malloc_sync 00:05:14.439 EAL: No shared files mode enabled, IPC is disabled 00:05:14.439 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:14.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.439 EAL: request: mp_malloc_sync 00:05:14.439 EAL: No shared files mode enabled, IPC is disabled 00:05:14.439 EAL: Heap on socket 0 was shrunk by 2MB 00:05:14.439 EAL: No shared files mode enabled, IPC is disabled 00:05:14.439 EAL: No shared files mode enabled, IPC is disabled 00:05:14.439 EAL: No shared files mode enabled, IPC is disabled 00:05:14.439 00:05:14.439 real 0m1.197s 00:05:14.439 user 0m0.661s 00:05:14.439 sys 0m0.410s 00:05:14.439 15:13:00 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.439 ************************************ 00:05:14.439 15:13:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:14.439 END TEST env_vtophys 00:05:14.439 ************************************ 00:05:14.439 15:13:00 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:14.439 15:13:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.439 15:13:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.439 15:13:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.439 ************************************ 00:05:14.439 START TEST env_pci 00:05:14.439 ************************************ 00:05:14.439 15:13:00 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:14.439 00:05:14.439 00:05:14.439 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.439 http://cunit.sourceforge.net/ 00:05:14.439 00:05:14.439 00:05:14.439 Suite: pci 00:05:14.439 Test: pci_hook ...[2024-11-20 15:13:00.788554] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58531 has claimed it 00:05:14.439 passed 00:05:14.439 00:05:14.439 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.439 suites 1 1 n/a 0 0 00:05:14.439 tests 1 1 1 0 0 00:05:14.439 asserts 25 25 25 0 n/a 00:05:14.439 00:05:14.440 Elapsed time = 0.003 seconds 00:05:14.440 EAL: Cannot find device (10000:00:01.0) 00:05:14.440 EAL: Failed to attach device on primary process 00:05:14.440 00:05:14.440 real 0m0.029s 00:05:14.440 user 0m0.013s 00:05:14.440 sys 0m0.016s 00:05:14.440 15:13:00 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.440 15:13:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:14.440 ************************************ 00:05:14.440 END TEST env_pci 00:05:14.440 ************************************ 00:05:14.440 15:13:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:14.440 15:13:00 env -- env/env.sh@15 -- # uname 00:05:14.440 15:13:00 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:14.440 15:13:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:14.440 15:13:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.440 15:13:00 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:14.440 15:13:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.440 15:13:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.440 ************************************ 00:05:14.440 START TEST env_dpdk_post_init 00:05:14.440 ************************************ 00:05:14.440 15:13:00 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.440 EAL: Detected CPU lcores: 10 00:05:14.440 EAL: Detected NUMA nodes: 1 00:05:14.440 EAL: Detected shared linkage of DPDK 00:05:14.440 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:14.440 EAL: Selected IOVA mode 'PA' 00:05:14.699 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:14.699 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:14.699 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:14.699 Starting DPDK initialization... 00:05:14.699 Starting SPDK post initialization... 00:05:14.699 SPDK NVMe probe 00:05:14.699 Attaching to 0000:00:10.0 00:05:14.699 Attaching to 0000:00:11.0 00:05:14.699 Attached to 0000:00:10.0 00:05:14.699 Attached to 0000:00:11.0 00:05:14.699 Cleaning up... 00:05:14.699 00:05:14.699 real 0m0.219s 00:05:14.699 user 0m0.067s 00:05:14.699 sys 0m0.054s 00:05:14.699 15:13:01 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.699 15:13:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.699 ************************************ 00:05:14.699 END TEST env_dpdk_post_init 00:05:14.699 ************************************ 00:05:14.699 15:13:01 env -- env/env.sh@26 -- # uname 00:05:14.699 15:13:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:14.699 15:13:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:14.699 15:13:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.699 15:13:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.699 15:13:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.699 ************************************ 00:05:14.699 START TEST env_mem_callbacks 00:05:14.699 ************************************ 00:05:14.699 15:13:01 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:14.957 EAL: Detected CPU lcores: 10 00:05:14.957 EAL: Detected NUMA nodes: 1 00:05:14.957 EAL: Detected shared linkage of DPDK 00:05:14.957 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:14.957 EAL: Selected IOVA mode 'PA' 00:05:14.957 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:14.957 00:05:14.957 00:05:14.957 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.957 http://cunit.sourceforge.net/ 00:05:14.957 00:05:14.957 00:05:14.957 Suite: memory 00:05:14.957 Test: test ... 
00:05:14.957 register 0x200000200000 2097152
00:05:14.957 malloc 3145728
00:05:14.957 register 0x200000400000 4194304
00:05:14.957 buf 0x200000500000 len 3145728 PASSED
00:05:14.957 malloc 64
00:05:14.957 buf 0x2000004fff40 len 64 PASSED
00:05:14.957 malloc 4194304
00:05:14.957 register 0x200000800000 6291456
00:05:14.957 buf 0x200000a00000 len 4194304 PASSED
00:05:14.957 free 0x200000500000 3145728
00:05:14.957 free 0x2000004fff40 64
00:05:14.957 unregister 0x200000400000 4194304 PASSED
00:05:14.957 free 0x200000a00000 4194304
00:05:14.957 unregister 0x200000800000 6291456 PASSED
00:05:14.957 malloc 8388608
00:05:14.957 register 0x200000400000 10485760
00:05:14.957 buf 0x200000600000 len 8388608 PASSED
00:05:14.957 free 0x200000600000 8388608
00:05:14.957 unregister 0x200000400000 10485760 PASSED
00:05:14.957 passed
00:05:14.957
00:05:14.957 Run Summary: Type Total Ran Passed Failed Inactive
00:05:14.957 suites 1 1 n/a 0 0
00:05:14.957 tests 1 1 1 0 0
00:05:14.957 asserts 15 15 15 0 n/a
00:05:14.957
00:05:14.957 Elapsed time = 0.010 seconds
00:05:14.957
00:05:14.957 real 0m0.158s
00:05:14.957 user 0m0.021s
00:05:14.957 sys 0m0.036s
00:05:14.957 15:13:01 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:14.957 15:13:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:14.957 ************************************
00:05:14.957 END TEST env_mem_callbacks
00:05:14.957 ************************************
00:05:14.957
00:05:14.957 real 0m2.353s
00:05:14.957 user 0m1.145s
00:05:14.957 sys 0m0.895s
00:05:14.957 15:13:01 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:14.957 15:13:01 env -- common/autotest_common.sh@10 -- # set +x
00:05:14.957 ************************************
00:05:14.957 END TEST env
00:05:14.957 ************************************
00:05:15.216 15:13:01 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:15.216 15:13:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:15.216 15:13:01 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:15.216 15:13:01 -- common/autotest_common.sh@10 -- # set +x
00:05:15.216 ************************************
00:05:15.216 START TEST rpc
00:05:15.216 ************************************
00:05:15.216 15:13:01 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:15.216 * Looking for test storage...
00:05:15.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:15.216 15:13:01 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.216 15:13:01 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.216 15:13:01 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.216 15:13:01 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.216 15:13:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.216 15:13:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.216 15:13:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.216 15:13:01 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.216 15:13:01 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.216 15:13:01 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.216 15:13:01 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.216 15:13:01 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.216 15:13:01 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.216 15:13:01 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.216 15:13:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.216 15:13:01 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:15.216 15:13:01 rpc -- scripts/common.sh@345 -- # : 1 00:05:15.216 15:13:01 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.216 15:13:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.216 15:13:01 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:15.216 15:13:01 rpc -- scripts/common.sh@353 -- # local d=1 00:05:15.216 15:13:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.216 15:13:01 rpc -- scripts/common.sh@355 -- # echo 1 00:05:15.216 15:13:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.216 15:13:01 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:15.216 15:13:01 rpc -- scripts/common.sh@353 -- # local d=2 00:05:15.216 15:13:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.216 15:13:01 rpc -- scripts/common.sh@355 -- # echo 2 00:05:15.216 15:13:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.216 15:13:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.216 15:13:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.216 15:13:01 rpc -- scripts/common.sh@368 -- # return 0 00:05:15.216 15:13:01 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.216 15:13:01 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.216 --rc genhtml_branch_coverage=1 00:05:15.216 --rc genhtml_function_coverage=1 00:05:15.216 --rc genhtml_legend=1 00:05:15.216 --rc geninfo_all_blocks=1 00:05:15.216 --rc geninfo_unexecuted_blocks=1 00:05:15.216 00:05:15.216 ' 00:05:15.216 15:13:01 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.216 --rc genhtml_branch_coverage=1 00:05:15.216 --rc genhtml_function_coverage=1 00:05:15.216 --rc genhtml_legend=1 00:05:15.216 --rc geninfo_all_blocks=1 00:05:15.216 --rc geninfo_unexecuted_blocks=1 00:05:15.216 00:05:15.216 ' 00:05:15.216 15:13:01 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.216 --rc genhtml_branch_coverage=1 00:05:15.216 --rc genhtml_function_coverage=1 00:05:15.216 --rc 
genhtml_legend=1 00:05:15.216 --rc geninfo_all_blocks=1 00:05:15.216 --rc geninfo_unexecuted_blocks=1 00:05:15.216 00:05:15.216 ' 00:05:15.216 15:13:01 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.216 --rc genhtml_branch_coverage=1 00:05:15.216 --rc genhtml_function_coverage=1 00:05:15.216 --rc genhtml_legend=1 00:05:15.216 --rc geninfo_all_blocks=1 00:05:15.216 --rc geninfo_unexecuted_blocks=1 00:05:15.216 00:05:15.216 ' 00:05:15.217 15:13:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58653 00:05:15.217 15:13:01 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:15.217 15:13:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.217 15:13:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58653 00:05:15.217 15:13:01 rpc -- common/autotest_common.sh@835 -- # '[' -z 58653 ']' 00:05:15.217 15:13:01 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.217 15:13:01 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.217 15:13:01 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.217 15:13:01 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.217 15:13:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.475 [2024-11-20 15:13:01.738005] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:05:15.475 [2024-11-20 15:13:01.738116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58653 ] 00:05:15.475 [2024-11-20 15:13:01.890798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.734 [2024-11-20 15:13:01.972976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:15.734 [2024-11-20 15:13:01.973039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58653' to capture a snapshot of events at runtime. 00:05:15.734 [2024-11-20 15:13:01.973050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:15.734 [2024-11-20 15:13:01.973060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:15.734 [2024-11-20 15:13:01.973066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58653 for offline analysis/debug. 
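The lcov probe in the xtrace above (lt / cmp_versions from scripts/common.sh) decides whether the installed lcov predates 2.x before exporting the branch- and function-coverage options. The technique is a plain field-by-field numeric comparison of dot-separated version strings; a self-contained sketch of the same logic (the helper name version_lt is ours, and version fields are assumed to be purely numeric):

  version_lt() {   # version_lt A B -> exit 0 iff version A sorts strictly before B
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly greater: not less-than
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller: less-than
      done
      return 1                                          # equal versions are not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov: add branch/function coverage flags"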
00:05:15.734 [2024-11-20 15:13:01.973488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.302 15:13:02 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.302 15:13:02 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:16.302 15:13:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:16.302 15:13:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:16.302 15:13:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:16.302 15:13:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:16.302 15:13:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.302 15:13:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.302 15:13:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.302 ************************************ 00:05:16.302 START TEST rpc_integrity 00:05:16.302 ************************************ 00:05:16.302 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:16.302 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.302 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.302 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.302 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.302 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:16.302 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:16.302 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.302 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.302 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.302 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.302 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.302 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:16.302 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.302 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.302 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.562 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.562 { 00:05:16.562 "aliases": [ 00:05:16.562 "210d5cc0-fecb-499f-bf66-32b71c4ce803" 00:05:16.562 ], 00:05:16.562 "assigned_rate_limits": { 00:05:16.562 "r_mbytes_per_sec": 0, 00:05:16.562 "rw_ios_per_sec": 0, 00:05:16.562 "rw_mbytes_per_sec": 0, 00:05:16.562 "w_mbytes_per_sec": 0 00:05:16.562 }, 00:05:16.562 "block_size": 512, 00:05:16.562 "claimed": false, 00:05:16.562 "driver_specific": {}, 00:05:16.562 "memory_domains": [ 00:05:16.562 { 00:05:16.562 "dma_device_id": "system", 00:05:16.562 "dma_device_type": 1 00:05:16.562 }, 00:05:16.562 { 00:05:16.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.562 "dma_device_type": 2 00:05:16.562 } 00:05:16.562 ], 00:05:16.562 "name": "Malloc0", 
00:05:16.562 "num_blocks": 16384, 00:05:16.562 "product_name": "Malloc disk", 00:05:16.562 "supported_io_types": { 00:05:16.562 "abort": true, 00:05:16.562 "compare": false, 00:05:16.562 "compare_and_write": false, 00:05:16.562 "copy": true, 00:05:16.562 "flush": true, 00:05:16.562 "get_zone_info": false, 00:05:16.562 "nvme_admin": false, 00:05:16.562 "nvme_io": false, 00:05:16.562 "nvme_io_md": false, 00:05:16.562 "nvme_iov_md": false, 00:05:16.562 "read": true, 00:05:16.562 "reset": true, 00:05:16.562 "seek_data": false, 00:05:16.562 "seek_hole": false, 00:05:16.562 "unmap": true, 00:05:16.562 "write": true, 00:05:16.562 "write_zeroes": true, 00:05:16.562 "zcopy": true, 00:05:16.562 "zone_append": false, 00:05:16.562 "zone_management": false 00:05:16.562 }, 00:05:16.562 "uuid": "210d5cc0-fecb-499f-bf66-32b71c4ce803", 00:05:16.562 "zoned": false 00:05:16.562 } 00:05:16.562 ]' 00:05:16.562 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:16.562 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:16.562 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.562 [2024-11-20 15:13:02.843836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:16.562 [2024-11-20 15:13:02.843890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:16.562 [2024-11-20 15:13:02.843915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa9c30 00:05:16.562 [2024-11-20 15:13:02.843924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:16.562 [2024-11-20 15:13:02.845595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:16.562 [2024-11-20 15:13:02.845633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:16.562 Passthru0 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.562 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.562 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:16.562 { 00:05:16.562 "aliases": [ 00:05:16.562 "210d5cc0-fecb-499f-bf66-32b71c4ce803" 00:05:16.562 ], 00:05:16.562 "assigned_rate_limits": { 00:05:16.562 "r_mbytes_per_sec": 0, 00:05:16.562 "rw_ios_per_sec": 0, 00:05:16.562 "rw_mbytes_per_sec": 0, 00:05:16.562 "w_mbytes_per_sec": 0 00:05:16.562 }, 00:05:16.562 "block_size": 512, 00:05:16.562 "claim_type": "exclusive_write", 00:05:16.562 "claimed": true, 00:05:16.562 "driver_specific": {}, 00:05:16.562 "memory_domains": [ 00:05:16.562 { 00:05:16.562 "dma_device_id": "system", 00:05:16.562 "dma_device_type": 1 00:05:16.562 }, 00:05:16.562 { 00:05:16.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.562 "dma_device_type": 2 00:05:16.562 } 00:05:16.562 ], 00:05:16.562 "name": "Malloc0", 00:05:16.562 "num_blocks": 16384, 00:05:16.562 "product_name": "Malloc disk", 00:05:16.562 "supported_io_types": { 00:05:16.562 "abort": true, 00:05:16.562 "compare": false, 00:05:16.562 
"compare_and_write": false, 00:05:16.562 "copy": true, 00:05:16.562 "flush": true, 00:05:16.562 "get_zone_info": false, 00:05:16.562 "nvme_admin": false, 00:05:16.562 "nvme_io": false, 00:05:16.562 "nvme_io_md": false, 00:05:16.562 "nvme_iov_md": false, 00:05:16.562 "read": true, 00:05:16.562 "reset": true, 00:05:16.562 "seek_data": false, 00:05:16.562 "seek_hole": false, 00:05:16.562 "unmap": true, 00:05:16.562 "write": true, 00:05:16.562 "write_zeroes": true, 00:05:16.562 "zcopy": true, 00:05:16.562 "zone_append": false, 00:05:16.562 "zone_management": false 00:05:16.562 }, 00:05:16.562 "uuid": "210d5cc0-fecb-499f-bf66-32b71c4ce803", 00:05:16.562 "zoned": false 00:05:16.562 }, 00:05:16.562 { 00:05:16.562 "aliases": [ 00:05:16.562 "ce49559b-d7e9-58d3-bf53-93403bed93b1" 00:05:16.562 ], 00:05:16.562 "assigned_rate_limits": { 00:05:16.562 "r_mbytes_per_sec": 0, 00:05:16.562 "rw_ios_per_sec": 0, 00:05:16.562 "rw_mbytes_per_sec": 0, 00:05:16.562 "w_mbytes_per_sec": 0 00:05:16.562 }, 00:05:16.562 "block_size": 512, 00:05:16.562 "claimed": false, 00:05:16.562 "driver_specific": { 00:05:16.562 "passthru": { 00:05:16.562 "base_bdev_name": "Malloc0", 00:05:16.562 "name": "Passthru0" 00:05:16.562 } 00:05:16.562 }, 00:05:16.562 "memory_domains": [ 00:05:16.562 { 00:05:16.562 "dma_device_id": "system", 00:05:16.562 "dma_device_type": 1 00:05:16.562 }, 00:05:16.562 { 00:05:16.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.562 "dma_device_type": 2 00:05:16.562 } 00:05:16.562 ], 00:05:16.562 "name": "Passthru0", 00:05:16.562 "num_blocks": 16384, 00:05:16.562 "product_name": "passthru", 00:05:16.562 "supported_io_types": { 00:05:16.562 "abort": true, 00:05:16.562 "compare": false, 00:05:16.562 "compare_and_write": false, 00:05:16.562 "copy": true, 00:05:16.562 "flush": true, 00:05:16.562 "get_zone_info": false, 00:05:16.562 "nvme_admin": false, 00:05:16.562 "nvme_io": false, 00:05:16.562 "nvme_io_md": false, 00:05:16.562 "nvme_iov_md": false, 00:05:16.562 "read": true, 00:05:16.562 "reset": true, 00:05:16.562 "seek_data": false, 00:05:16.562 "seek_hole": false, 00:05:16.562 "unmap": true, 00:05:16.562 "write": true, 00:05:16.562 "write_zeroes": true, 00:05:16.562 "zcopy": true, 00:05:16.562 "zone_append": false, 00:05:16.562 "zone_management": false 00:05:16.562 }, 00:05:16.562 "uuid": "ce49559b-d7e9-58d3-bf53-93403bed93b1", 00:05:16.562 "zoned": false 00:05:16.562 } 00:05:16.562 ]' 00:05:16.562 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:16.562 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:16.562 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.562 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.562 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:05:16.562 15:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.563 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:16.563 15:13:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:16.563 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:16.563 00:05:16.563 real 0m0.345s 00:05:16.563 user 0m0.182s 00:05:16.563 sys 0m0.071s 00:05:16.563 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.563 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.563 ************************************ 00:05:16.563 END TEST rpc_integrity 00:05:16.563 ************************************ 00:05:16.822 15:13:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:16.822 15:13:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.822 15:13:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.822 15:13:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.822 ************************************ 00:05:16.822 START TEST rpc_plugins 00:05:16.822 ************************************ 00:05:16.822 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:16.822 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:16.822 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.822 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.822 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.822 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:16.822 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:16.822 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.822 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.822 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.822 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:16.822 { 00:05:16.822 "aliases": [ 00:05:16.822 "d85379be-cd64-48b1-8ff0-2058f159717c" 00:05:16.822 ], 00:05:16.822 "assigned_rate_limits": { 00:05:16.822 "r_mbytes_per_sec": 0, 00:05:16.822 "rw_ios_per_sec": 0, 00:05:16.822 "rw_mbytes_per_sec": 0, 00:05:16.822 "w_mbytes_per_sec": 0 00:05:16.822 }, 00:05:16.822 "block_size": 4096, 00:05:16.822 "claimed": false, 00:05:16.822 "driver_specific": {}, 00:05:16.822 "memory_domains": [ 00:05:16.822 { 00:05:16.822 "dma_device_id": "system", 00:05:16.822 "dma_device_type": 1 00:05:16.822 }, 00:05:16.822 { 00:05:16.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.822 "dma_device_type": 2 00:05:16.822 } 00:05:16.822 ], 00:05:16.822 "name": "Malloc1", 00:05:16.822 "num_blocks": 256, 00:05:16.822 "product_name": "Malloc disk", 00:05:16.822 "supported_io_types": { 00:05:16.822 "abort": true, 00:05:16.822 "compare": false, 00:05:16.822 "compare_and_write": false, 00:05:16.822 "copy": true, 00:05:16.822 "flush": true, 00:05:16.822 "get_zone_info": false, 00:05:16.822 "nvme_admin": false, 00:05:16.822 "nvme_io": false, 00:05:16.822 "nvme_io_md": false, 00:05:16.822 "nvme_iov_md": false, 00:05:16.823 "read": true, 00:05:16.823 "reset": true, 00:05:16.823 "seek_data": false, 00:05:16.823 "seek_hole": false, 00:05:16.823 "unmap": true, 00:05:16.823 "write": true, 00:05:16.823 "write_zeroes": true, 00:05:16.823 "zcopy": true, 00:05:16.823 "zone_append": false, 
00:05:16.823 "zone_management": false 00:05:16.823 }, 00:05:16.823 "uuid": "d85379be-cd64-48b1-8ff0-2058f159717c", 00:05:16.823 "zoned": false 00:05:16.823 } 00:05:16.823 ]' 00:05:16.823 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:16.823 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:16.823 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:16.823 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.823 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.823 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.823 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:16.823 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.823 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.823 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.823 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:16.823 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:16.823 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:16.823 00:05:16.823 real 0m0.172s 00:05:16.823 user 0m0.090s 00:05:16.823 sys 0m0.036s 00:05:16.823 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.823 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.823 ************************************ 00:05:16.823 END TEST rpc_plugins 00:05:16.823 ************************************ 00:05:17.082 15:13:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:17.082 15:13:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.082 15:13:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.082 15:13:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.082 ************************************ 00:05:17.082 START TEST rpc_trace_cmd_test 00:05:17.082 ************************************ 00:05:17.082 15:13:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:17.082 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:17.082 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:17.082 15:13:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.082 15:13:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.082 15:13:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.082 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:17.082 "bdev": { 00:05:17.082 "mask": "0x8", 00:05:17.082 "tpoint_mask": "0xffffffffffffffff" 00:05:17.082 }, 00:05:17.082 "bdev_nvme": { 00:05:17.082 "mask": "0x4000", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "bdev_raid": { 00:05:17.082 "mask": "0x20000", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "blob": { 00:05:17.082 "mask": "0x10000", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "blobfs": { 00:05:17.082 "mask": "0x80", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "dsa": { 00:05:17.082 "mask": "0x200", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "ftl": { 00:05:17.082 "mask": "0x40", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "iaa": { 00:05:17.082 "mask": "0x1000", 
00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "iscsi_conn": { 00:05:17.082 "mask": "0x2", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "nvme_pcie": { 00:05:17.082 "mask": "0x800", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "nvme_tcp": { 00:05:17.082 "mask": "0x2000", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "nvmf_rdma": { 00:05:17.082 "mask": "0x10", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "nvmf_tcp": { 00:05:17.082 "mask": "0x20", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "scheduler": { 00:05:17.082 "mask": "0x40000", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "scsi": { 00:05:17.082 "mask": "0x4", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "sock": { 00:05:17.082 "mask": "0x8000", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "thread": { 00:05:17.082 "mask": "0x400", 00:05:17.082 "tpoint_mask": "0x0" 00:05:17.082 }, 00:05:17.082 "tpoint_group_mask": "0x8", 00:05:17.082 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58653" 00:05:17.082 }' 00:05:17.082 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:17.082 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:17.082 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:17.082 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:17.083 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:17.083 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:17.083 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:17.083 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:17.083 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:17.343 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:17.343 00:05:17.343 real 0m0.218s 00:05:17.343 user 0m0.162s 00:05:17.343 sys 0m0.043s 00:05:17.343 15:13:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.343 15:13:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.343 ************************************ 00:05:17.343 END TEST rpc_trace_cmd_test 00:05:17.343 ************************************ 00:05:17.343 15:13:03 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:17.343 15:13:03 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:17.343 15:13:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.343 15:13:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.343 15:13:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.343 ************************************ 00:05:17.343 START TEST go_rpc 00:05:17.343 ************************************ 00:05:17.343 15:13:03 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:05:17.343 15:13:03 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:17.343 15:13:03 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:17.343 15:13:03 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:17.343 15:13:03 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:17.343 15:13:03 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.343 15:13:03 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.343 15:13:03 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.343 15:13:03 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.343 15:13:03 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:17.343 15:13:03 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:17.343 15:13:03 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["2a5022b1-5c73-4ddb-bc6c-2ad4456d8b98"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"2a5022b1-5c73-4ddb-bc6c-2ad4456d8b98","zoned":false}]' 00:05:17.343 15:13:03 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:17.343 15:13:03 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:17.343 15:13:03 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:17.344 15:13:03 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.344 15:13:03 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.344 15:13:03 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.344 15:13:03 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:17.344 15:13:03 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:17.344 15:13:03 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:17.603 15:13:03 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:17.603 00:05:17.603 real 0m0.224s 00:05:17.603 user 0m0.129s 00:05:17.603 sys 0m0.056s 00:05:17.603 15:13:03 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.603 15:13:03 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.603 ************************************ 00:05:17.603 END TEST go_rpc 00:05:17.603 ************************************ 00:05:17.603 15:13:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:17.603 15:13:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:17.603 15:13:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.603 15:13:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.603 15:13:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.603 ************************************ 00:05:17.603 START TEST rpc_daemon_integrity 00:05:17.603 ************************************ 00:05:17.603 15:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:17.603 15:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.603 15:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.603 15:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.603 15:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.603 15:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.603 15:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:17.603 
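Both integrity suites assert the claim relationship that the JSON dumps above spell out: creating Passthru0 on top of a malloc bdev flips the base bdev to "claimed": true with "claim_type": "exclusive_write", while the passthru vbdev itself stays unclaimed. A quick, hedged way to eyeball that state on a live target (jq field names taken from the dumps; socket and client paths as in this log):

  SOCK=/var/tmp/spdk.sock
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC -s "$SOCK" bdev_get_bdevs \
      | jq -r '.[] | [.name, .product_name, (.claimed | tostring), (.claim_type // "-")] | @tsv'
  # prints, e.g.:  Malloc3    Malloc disk  true   exclusive_write
  #                Passthru0  passthru     false  -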
15:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.603 15:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.603 15:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.603 15:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.603 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.603 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:17.603 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.603 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.603 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.603 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.603 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.603 { 00:05:17.603 "aliases": [ 00:05:17.603 "d23e501f-beb4-47c5-95bc-5171a12c9120" 00:05:17.603 ], 00:05:17.603 "assigned_rate_limits": { 00:05:17.603 "r_mbytes_per_sec": 0, 00:05:17.603 "rw_ios_per_sec": 0, 00:05:17.603 "rw_mbytes_per_sec": 0, 00:05:17.603 "w_mbytes_per_sec": 0 00:05:17.603 }, 00:05:17.603 "block_size": 512, 00:05:17.603 "claimed": false, 00:05:17.603 "driver_specific": {}, 00:05:17.603 "memory_domains": [ 00:05:17.603 { 00:05:17.603 "dma_device_id": "system", 00:05:17.603 "dma_device_type": 1 00:05:17.603 }, 00:05:17.603 { 00:05:17.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.603 "dma_device_type": 2 00:05:17.603 } 00:05:17.603 ], 00:05:17.603 "name": "Malloc3", 00:05:17.603 "num_blocks": 16384, 00:05:17.603 "product_name": "Malloc disk", 00:05:17.603 "supported_io_types": { 00:05:17.603 "abort": true, 00:05:17.603 "compare": false, 00:05:17.603 "compare_and_write": false, 00:05:17.603 "copy": true, 00:05:17.603 "flush": true, 00:05:17.603 "get_zone_info": false, 00:05:17.603 "nvme_admin": false, 00:05:17.603 "nvme_io": false, 00:05:17.603 "nvme_io_md": false, 00:05:17.603 "nvme_iov_md": false, 00:05:17.603 "read": true, 00:05:17.603 "reset": true, 00:05:17.603 "seek_data": false, 00:05:17.603 "seek_hole": false, 00:05:17.603 "unmap": true, 00:05:17.603 "write": true, 00:05:17.603 "write_zeroes": true, 00:05:17.603 "zcopy": true, 00:05:17.603 "zone_append": false, 00:05:17.603 "zone_management": false 00:05:17.603 }, 00:05:17.603 "uuid": "d23e501f-beb4-47c5-95bc-5171a12c9120", 00:05:17.603 "zoned": false 00:05:17.603 } 00:05:17.603 ]' 00:05:17.603 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.863 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.863 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:17.863 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.863 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.863 [2024-11-20 15:13:04.093517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:17.863 [2024-11-20 15:13:04.093738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.863 [2024-11-20 15:13:04.093769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc34700 00:05:17.863 [2024-11-20 15:13:04.093779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:05:17.863 [2024-11-20 15:13:04.095494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.863 [2024-11-20 15:13:04.095532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.863 Passthru0 00:05:17.863 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.863 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.863 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.863 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.863 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.863 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.863 { 00:05:17.863 "aliases": [ 00:05:17.863 "d23e501f-beb4-47c5-95bc-5171a12c9120" 00:05:17.863 ], 00:05:17.863 "assigned_rate_limits": { 00:05:17.863 "r_mbytes_per_sec": 0, 00:05:17.863 "rw_ios_per_sec": 0, 00:05:17.863 "rw_mbytes_per_sec": 0, 00:05:17.863 "w_mbytes_per_sec": 0 00:05:17.863 }, 00:05:17.863 "block_size": 512, 00:05:17.863 "claim_type": "exclusive_write", 00:05:17.863 "claimed": true, 00:05:17.863 "driver_specific": {}, 00:05:17.863 "memory_domains": [ 00:05:17.863 { 00:05:17.863 "dma_device_id": "system", 00:05:17.863 "dma_device_type": 1 00:05:17.863 }, 00:05:17.863 { 00:05:17.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.863 "dma_device_type": 2 00:05:17.863 } 00:05:17.863 ], 00:05:17.863 "name": "Malloc3", 00:05:17.863 "num_blocks": 16384, 00:05:17.863 "product_name": "Malloc disk", 00:05:17.863 "supported_io_types": { 00:05:17.863 "abort": true, 00:05:17.863 "compare": false, 00:05:17.863 "compare_and_write": false, 00:05:17.863 "copy": true, 00:05:17.863 "flush": true, 00:05:17.863 "get_zone_info": false, 00:05:17.863 "nvme_admin": false, 00:05:17.863 "nvme_io": false, 00:05:17.863 "nvme_io_md": false, 00:05:17.863 "nvme_iov_md": false, 00:05:17.863 "read": true, 00:05:17.863 "reset": true, 00:05:17.863 "seek_data": false, 00:05:17.863 "seek_hole": false, 00:05:17.863 "unmap": true, 00:05:17.863 "write": true, 00:05:17.863 "write_zeroes": true, 00:05:17.863 "zcopy": true, 00:05:17.863 "zone_append": false, 00:05:17.863 "zone_management": false 00:05:17.863 }, 00:05:17.863 "uuid": "d23e501f-beb4-47c5-95bc-5171a12c9120", 00:05:17.863 "zoned": false 00:05:17.863 }, 00:05:17.863 { 00:05:17.863 "aliases": [ 00:05:17.863 "9247d70f-af92-5be6-969c-ea99e2717949" 00:05:17.863 ], 00:05:17.863 "assigned_rate_limits": { 00:05:17.863 "r_mbytes_per_sec": 0, 00:05:17.863 "rw_ios_per_sec": 0, 00:05:17.863 "rw_mbytes_per_sec": 0, 00:05:17.863 "w_mbytes_per_sec": 0 00:05:17.863 }, 00:05:17.863 "block_size": 512, 00:05:17.863 "claimed": false, 00:05:17.863 "driver_specific": { 00:05:17.863 "passthru": { 00:05:17.863 "base_bdev_name": "Malloc3", 00:05:17.863 "name": "Passthru0" 00:05:17.863 } 00:05:17.864 }, 00:05:17.864 "memory_domains": [ 00:05:17.864 { 00:05:17.864 "dma_device_id": "system", 00:05:17.864 "dma_device_type": 1 00:05:17.864 }, 00:05:17.864 { 00:05:17.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.864 "dma_device_type": 2 00:05:17.864 } 00:05:17.864 ], 00:05:17.864 "name": "Passthru0", 00:05:17.864 "num_blocks": 16384, 00:05:17.864 "product_name": "passthru", 00:05:17.864 "supported_io_types": { 00:05:17.864 "abort": true, 00:05:17.864 "compare": false, 00:05:17.864 "compare_and_write": false, 00:05:17.864 "copy": true, 
00:05:17.864 "flush": true, 00:05:17.864 "get_zone_info": false, 00:05:17.864 "nvme_admin": false, 00:05:17.864 "nvme_io": false, 00:05:17.864 "nvme_io_md": false, 00:05:17.864 "nvme_iov_md": false, 00:05:17.864 "read": true, 00:05:17.864 "reset": true, 00:05:17.864 "seek_data": false, 00:05:17.864 "seek_hole": false, 00:05:17.864 "unmap": true, 00:05:17.864 "write": true, 00:05:17.864 "write_zeroes": true, 00:05:17.864 "zcopy": true, 00:05:17.864 "zone_append": false, 00:05:17.864 "zone_management": false 00:05:17.864 }, 00:05:17.864 "uuid": "9247d70f-af92-5be6-969c-ea99e2717949", 00:05:17.864 "zoned": false 00:05:17.864 } 00:05:17.864 ]' 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.864 00:05:17.864 real 0m0.343s 00:05:17.864 user 0m0.196s 00:05:17.864 sys 0m0.057s 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.864 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.864 ************************************ 00:05:17.864 END TEST rpc_daemon_integrity 00:05:17.864 ************************************ 00:05:17.864 15:13:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:17.864 15:13:04 rpc -- rpc/rpc.sh@84 -- # killprocess 58653 00:05:17.864 15:13:04 rpc -- common/autotest_common.sh@954 -- # '[' -z 58653 ']' 00:05:17.864 15:13:04 rpc -- common/autotest_common.sh@958 -- # kill -0 58653 00:05:17.864 15:13:04 rpc -- common/autotest_common.sh@959 -- # uname 00:05:17.864 15:13:04 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.864 15:13:04 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58653 00:05:18.121 15:13:04 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.121 15:13:04 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.121 15:13:04 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58653' 00:05:18.121 killing process with pid 58653 00:05:18.121 15:13:04 rpc -- 
common/autotest_common.sh@973 -- # kill 58653 00:05:18.121 15:13:04 rpc -- common/autotest_common.sh@978 -- # wait 58653 00:05:18.687 ************************************ 00:05:18.687 END TEST rpc 00:05:18.687 ************************************ 00:05:18.687 00:05:18.687 real 0m3.467s 00:05:18.687 user 0m4.120s 00:05:18.687 sys 0m1.144s 00:05:18.687 15:13:04 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.687 15:13:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.687 15:13:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:18.687 15:13:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.687 15:13:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.687 15:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:18.687 ************************************ 00:05:18.688 START TEST skip_rpc 00:05:18.688 ************************************ 00:05:18.688 15:13:04 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:18.688 * Looking for test storage... 00:05:18.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.688 15:13:05 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.688 15:13:05 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.688 15:13:05 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.946 15:13:05 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.946 15:13:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:18.946 15:13:05 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.946 15:13:05 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.946 --rc genhtml_branch_coverage=1 00:05:18.946 --rc genhtml_function_coverage=1 00:05:18.946 --rc genhtml_legend=1 00:05:18.946 --rc geninfo_all_blocks=1 00:05:18.946 --rc geninfo_unexecuted_blocks=1 00:05:18.946 00:05:18.946 ' 00:05:18.946 15:13:05 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.946 --rc genhtml_branch_coverage=1 00:05:18.946 --rc genhtml_function_coverage=1 00:05:18.946 --rc genhtml_legend=1 00:05:18.946 --rc geninfo_all_blocks=1 00:05:18.946 --rc geninfo_unexecuted_blocks=1 00:05:18.946 00:05:18.946 ' 00:05:18.946 15:13:05 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.946 --rc genhtml_branch_coverage=1 00:05:18.946 --rc genhtml_function_coverage=1 00:05:18.946 --rc genhtml_legend=1 00:05:18.946 --rc geninfo_all_blocks=1 00:05:18.946 --rc geninfo_unexecuted_blocks=1 00:05:18.946 00:05:18.946 ' 00:05:18.946 15:13:05 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.946 --rc genhtml_branch_coverage=1 00:05:18.946 --rc genhtml_function_coverage=1 00:05:18.946 --rc genhtml_legend=1 00:05:18.946 --rc geninfo_all_blocks=1 00:05:18.946 --rc geninfo_unexecuted_blocks=1 00:05:18.946 00:05:18.946 ' 00:05:18.946 15:13:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:18.946 15:13:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:18.946 15:13:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:18.947 15:13:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.947 15:13:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.947 15:13:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.947 ************************************ 00:05:18.947 START TEST skip_rpc 00:05:18.947 ************************************ 00:05:18.947 15:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:18.947 15:13:05 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58923 00:05:18.947 15:13:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:18.947 15:13:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.947 15:13:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:18.947 [2024-11-20 15:13:05.282807] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:05:18.947 [2024-11-20 15:13:05.283088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58923 ] 00:05:19.205 [2024-11-20 15:13:05.437466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.205 [2024-11-20 15:13:05.519621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.477 2024/11/20 15:13:10 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58923 00:05:24.477 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58923 ']' 00:05:24.478 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58923 00:05:24.478 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:24.478 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.478 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58923 00:05:24.478 killing process with pid 58923 00:05:24.478 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.478 15:13:10 skip_rpc.skip_rpc 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.478 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58923' 00:05:24.478 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58923 00:05:24.478 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58923 00:05:24.478 00:05:24.478 real 0m5.603s 00:05:24.478 user 0m5.124s 00:05:24.478 sys 0m0.409s 00:05:24.478 ************************************ 00:05:24.478 END TEST skip_rpc 00:05:24.478 ************************************ 00:05:24.478 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.478 15:13:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.478 15:13:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:24.478 15:13:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.478 15:13:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.478 15:13:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.478 ************************************ 00:05:24.478 START TEST skip_rpc_with_json 00:05:24.478 ************************************ 00:05:24.478 15:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:24.478 15:13:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:24.478 15:13:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59016 00:05:24.478 15:13:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.478 15:13:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.478 15:13:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59016 00:05:24.478 15:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59016 ']' 00:05:24.478 15:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.478 15:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.478 15:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.478 15:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.478 15:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.737 [2024-11-20 15:13:10.957512] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
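skip_rpc_with_json closes the loop that the --no-rpc-server run above deliberately broke: it boots a normal target (pid 59016), snapshots the complete subsystem configuration with save_config (the large JSON dump that follows), and later restarts spdk_tgt from that file alone (pid 59061 below). A sketch of the same cycle, with the socket, binary, and config paths taken from this log:

  SOCK=/var/tmp/spdk.sock
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  CONFIG=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
  TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $RPC -s "$SOCK" save_config > "$CONFIG"          # serialize every subsystem's current state
  jq -r '.subsystems[].subsystem' "$CONFIG"        # fsdev, keyring, iobuf, sock, ..., nvmf, iscsi
  "$TGT" --no-rpc-server -m 0x1 --json "$CONFIG"   # replay the config with no RPC server at all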
00:05:24.737 [2024-11-20 15:13:10.957612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59016 ] 00:05:24.737 [2024-11-20 15:13:11.111779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.737 [2024-11-20 15:13:11.193920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.673 15:13:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.673 15:13:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:25.673 15:13:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:25.673 15:13:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.673 15:13:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.673 [2024-11-20 15:13:11.890610] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:25.673 2024/11/20 15:13:11 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:25.673 request: 00:05:25.673 { 00:05:25.673 "method": "nvmf_get_transports", 00:05:25.673 "params": { 00:05:25.673 "trtype": "tcp" 00:05:25.673 } 00:05:25.673 } 00:05:25.673 Got JSON-RPC error response 00:05:25.673 GoRPCClient: error on JSON-RPC call 00:05:25.673 15:13:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:25.673 15:13:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:25.673 15:13:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.673 15:13:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.673 [2024-11-20 15:13:11.906688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.673 15:13:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.673 15:13:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:25.674 15:13:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.674 15:13:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.674 15:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.674 15:13:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:25.674 { 00:05:25.674 "subsystems": [ 00:05:25.674 { 00:05:25.674 "subsystem": "fsdev", 00:05:25.674 "config": [ 00:05:25.674 { 00:05:25.674 "method": "fsdev_set_opts", 00:05:25.674 "params": { 00:05:25.674 "fsdev_io_cache_size": 256, 00:05:25.674 "fsdev_io_pool_size": 65535 00:05:25.674 } 00:05:25.674 } 00:05:25.674 ] 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "subsystem": "keyring", 00:05:25.674 "config": [] 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "subsystem": "iobuf", 00:05:25.674 "config": [ 00:05:25.674 { 00:05:25.674 "method": "iobuf_set_options", 00:05:25.674 "params": { 00:05:25.674 "enable_numa": false, 00:05:25.674 "large_bufsize": 135168, 00:05:25.674 "large_pool_count": 1024, 00:05:25.674 "small_bufsize": 8192, 00:05:25.674 "small_pool_count": 8192 00:05:25.674 } 
00:05:25.674 } 00:05:25.674 ] 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "subsystem": "sock", 00:05:25.674 "config": [ 00:05:25.674 { 00:05:25.674 "method": "sock_set_default_impl", 00:05:25.674 "params": { 00:05:25.674 "impl_name": "posix" 00:05:25.674 } 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "method": "sock_impl_set_options", 00:05:25.674 "params": { 00:05:25.674 "enable_ktls": false, 00:05:25.674 "enable_placement_id": 0, 00:05:25.674 "enable_quickack": false, 00:05:25.674 "enable_recv_pipe": true, 00:05:25.674 "enable_zerocopy_send_client": false, 00:05:25.674 "enable_zerocopy_send_server": true, 00:05:25.674 "impl_name": "ssl", 00:05:25.674 "recv_buf_size": 4096, 00:05:25.674 "send_buf_size": 4096, 00:05:25.674 "tls_version": 0, 00:05:25.674 "zerocopy_threshold": 0 00:05:25.674 } 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "method": "sock_impl_set_options", 00:05:25.674 "params": { 00:05:25.674 "enable_ktls": false, 00:05:25.674 "enable_placement_id": 0, 00:05:25.674 "enable_quickack": false, 00:05:25.674 "enable_recv_pipe": true, 00:05:25.674 "enable_zerocopy_send_client": false, 00:05:25.674 "enable_zerocopy_send_server": true, 00:05:25.674 "impl_name": "posix", 00:05:25.674 "recv_buf_size": 2097152, 00:05:25.674 "send_buf_size": 2097152, 00:05:25.674 "tls_version": 0, 00:05:25.674 "zerocopy_threshold": 0 00:05:25.674 } 00:05:25.674 } 00:05:25.674 ] 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "subsystem": "vmd", 00:05:25.674 "config": [] 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "subsystem": "accel", 00:05:25.674 "config": [ 00:05:25.674 { 00:05:25.674 "method": "accel_set_options", 00:05:25.674 "params": { 00:05:25.674 "buf_count": 2048, 00:05:25.674 "large_cache_size": 16, 00:05:25.674 "sequence_count": 2048, 00:05:25.674 "small_cache_size": 128, 00:05:25.674 "task_count": 2048 00:05:25.674 } 00:05:25.674 } 00:05:25.674 ] 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "subsystem": "bdev", 00:05:25.674 "config": [ 00:05:25.674 { 00:05:25.674 "method": "bdev_set_options", 00:05:25.674 "params": { 00:05:25.674 "bdev_auto_examine": true, 00:05:25.674 "bdev_io_cache_size": 256, 00:05:25.674 "bdev_io_pool_size": 65535, 00:05:25.674 "iobuf_large_cache_size": 16, 00:05:25.674 "iobuf_small_cache_size": 128 00:05:25.674 } 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "method": "bdev_raid_set_options", 00:05:25.674 "params": { 00:05:25.674 "process_max_bandwidth_mb_sec": 0, 00:05:25.674 "process_window_size_kb": 1024 00:05:25.674 } 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "method": "bdev_iscsi_set_options", 00:05:25.674 "params": { 00:05:25.674 "timeout_sec": 30 00:05:25.674 } 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "method": "bdev_nvme_set_options", 00:05:25.674 "params": { 00:05:25.674 "action_on_timeout": "none", 00:05:25.674 "allow_accel_sequence": false, 00:05:25.674 "arbitration_burst": 0, 00:05:25.674 "bdev_retry_count": 3, 00:05:25.674 "ctrlr_loss_timeout_sec": 0, 00:05:25.674 "delay_cmd_submit": true, 00:05:25.674 "dhchap_dhgroups": [ 00:05:25.674 "null", 00:05:25.674 "ffdhe2048", 00:05:25.674 "ffdhe3072", 00:05:25.674 "ffdhe4096", 00:05:25.674 "ffdhe6144", 00:05:25.674 "ffdhe8192" 00:05:25.674 ], 00:05:25.674 "dhchap_digests": [ 00:05:25.674 "sha256", 00:05:25.674 "sha384", 00:05:25.674 "sha512" 00:05:25.674 ], 00:05:25.674 "disable_auto_failback": false, 00:05:25.674 "fast_io_fail_timeout_sec": 0, 00:05:25.674 "generate_uuids": false, 00:05:25.674 "high_priority_weight": 0, 00:05:25.674 "io_path_stat": false, 00:05:25.674 "io_queue_requests": 0, 00:05:25.674 
"keep_alive_timeout_ms": 10000, 00:05:25.674 "low_priority_weight": 0, 00:05:25.674 "medium_priority_weight": 0, 00:05:25.674 "nvme_adminq_poll_period_us": 10000, 00:05:25.674 "nvme_error_stat": false, 00:05:25.674 "nvme_ioq_poll_period_us": 0, 00:05:25.674 "rdma_cm_event_timeout_ms": 0, 00:05:25.674 "rdma_max_cq_size": 0, 00:05:25.674 "rdma_srq_size": 0, 00:05:25.674 "reconnect_delay_sec": 0, 00:05:25.674 "timeout_admin_us": 0, 00:05:25.674 "timeout_us": 0, 00:05:25.674 "transport_ack_timeout": 0, 00:05:25.674 "transport_retry_count": 4, 00:05:25.674 "transport_tos": 0 00:05:25.674 } 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "method": "bdev_nvme_set_hotplug", 00:05:25.674 "params": { 00:05:25.674 "enable": false, 00:05:25.674 "period_us": 100000 00:05:25.674 } 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "method": "bdev_wait_for_examine" 00:05:25.674 } 00:05:25.674 ] 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "subsystem": "scsi", 00:05:25.674 "config": null 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "subsystem": "scheduler", 00:05:25.674 "config": [ 00:05:25.674 { 00:05:25.674 "method": "framework_set_scheduler", 00:05:25.674 "params": { 00:05:25.674 "name": "static" 00:05:25.674 } 00:05:25.674 } 00:05:25.674 ] 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "subsystem": "vhost_scsi", 00:05:25.674 "config": [] 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "subsystem": "vhost_blk", 00:05:25.674 "config": [] 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "subsystem": "ublk", 00:05:25.674 "config": [] 00:05:25.674 }, 00:05:25.674 { 00:05:25.674 "subsystem": "nbd", 00:05:25.674 "config": [] 00:05:25.674 }, 00:05:25.675 { 00:05:25.675 "subsystem": "nvmf", 00:05:25.675 "config": [ 00:05:25.675 { 00:05:25.675 "method": "nvmf_set_config", 00:05:25.675 "params": { 00:05:25.675 "admin_cmd_passthru": { 00:05:25.675 "identify_ctrlr": false 00:05:25.675 }, 00:05:25.675 "dhchap_dhgroups": [ 00:05:25.675 "null", 00:05:25.675 "ffdhe2048", 00:05:25.675 "ffdhe3072", 00:05:25.675 "ffdhe4096", 00:05:25.675 "ffdhe6144", 00:05:25.675 "ffdhe8192" 00:05:25.675 ], 00:05:25.675 "dhchap_digests": [ 00:05:25.675 "sha256", 00:05:25.675 "sha384", 00:05:25.675 "sha512" 00:05:25.675 ], 00:05:25.675 "discovery_filter": "match_any" 00:05:25.675 } 00:05:25.675 }, 00:05:25.675 { 00:05:25.675 "method": "nvmf_set_max_subsystems", 00:05:25.675 "params": { 00:05:25.675 "max_subsystems": 1024 00:05:25.675 } 00:05:25.675 }, 00:05:25.675 { 00:05:25.675 "method": "nvmf_set_crdt", 00:05:25.675 "params": { 00:05:25.675 "crdt1": 0, 00:05:25.675 "crdt2": 0, 00:05:25.675 "crdt3": 0 00:05:25.675 } 00:05:25.675 }, 00:05:25.675 { 00:05:25.675 "method": "nvmf_create_transport", 00:05:25.675 "params": { 00:05:25.675 "abort_timeout_sec": 1, 00:05:25.675 "ack_timeout": 0, 00:05:25.675 "buf_cache_size": 4294967295, 00:05:25.675 "c2h_success": true, 00:05:25.675 "data_wr_pool_size": 0, 00:05:25.675 "dif_insert_or_strip": false, 00:05:25.675 "in_capsule_data_size": 4096, 00:05:25.675 "io_unit_size": 131072, 00:05:25.675 "max_aq_depth": 128, 00:05:25.675 "max_io_qpairs_per_ctrlr": 127, 00:05:25.675 "max_io_size": 131072, 00:05:25.675 "max_queue_depth": 128, 00:05:25.675 "num_shared_buffers": 511, 00:05:25.675 "sock_priority": 0, 00:05:25.675 "trtype": "TCP", 00:05:25.675 "zcopy": false 00:05:25.675 } 00:05:25.675 } 00:05:25.675 ] 00:05:25.675 }, 00:05:25.675 { 00:05:25.675 "subsystem": "iscsi", 00:05:25.675 "config": [ 00:05:25.675 { 00:05:25.675 "method": "iscsi_set_options", 00:05:25.675 "params": { 00:05:25.675 "allow_duplicated_isid": false, 
00:05:25.675 "chap_group": 0, 00:05:25.675 "data_out_pool_size": 2048, 00:05:25.675 "default_time2retain": 20, 00:05:25.675 "default_time2wait": 2, 00:05:25.675 "disable_chap": false, 00:05:25.675 "error_recovery_level": 0, 00:05:25.675 "first_burst_length": 8192, 00:05:25.675 "immediate_data": true, 00:05:25.675 "immediate_data_pool_size": 16384, 00:05:25.675 "max_connections_per_session": 2, 00:05:25.675 "max_large_datain_per_connection": 64, 00:05:25.675 "max_queue_depth": 64, 00:05:25.675 "max_r2t_per_connection": 4, 00:05:25.675 "max_sessions": 128, 00:05:25.675 "mutual_chap": false, 00:05:25.675 "node_base": "iqn.2016-06.io.spdk", 00:05:25.675 "nop_in_interval": 30, 00:05:25.675 "nop_timeout": 60, 00:05:25.675 "pdu_pool_size": 36864, 00:05:25.675 "require_chap": false 00:05:25.675 } 00:05:25.675 } 00:05:25.675 ] 00:05:25.675 } 00:05:25.675 ] 00:05:25.675 } 00:05:25.675 15:13:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:25.675 15:13:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59016 00:05:25.675 15:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59016 ']' 00:05:25.675 15:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59016 00:05:25.675 15:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:25.675 15:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.675 15:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59016 00:05:25.675 killing process with pid 59016 00:05:25.675 15:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.675 15:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.675 15:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59016' 00:05:25.675 15:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59016 00:05:25.675 15:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59016 00:05:26.243 15:13:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59061 00:05:26.243 15:13:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:26.243 15:13:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:31.585 15:13:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59061 00:05:31.585 15:13:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59061 ']' 00:05:31.585 15:13:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59061 00:05:31.585 15:13:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:31.585 15:13:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.585 15:13:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59061 00:05:31.585 killing process with pid 59061 00:05:31.585 15:13:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.585 15:13:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.585 15:13:17 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59061' 00:05:31.585 15:13:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59061 00:05:31.585 15:13:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59061 00:05:31.845 15:13:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.845 15:13:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.845 ************************************ 00:05:31.845 END TEST skip_rpc_with_json 00:05:31.845 ************************************ 00:05:31.845 00:05:31.845 real 0m7.361s 00:05:31.845 user 0m6.834s 00:05:31.845 sys 0m0.920s 00:05:31.845 15:13:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.845 15:13:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.845 15:13:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:31.845 15:13:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.845 15:13:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.845 15:13:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.105 ************************************ 00:05:32.105 START TEST skip_rpc_with_delay 00:05:32.105 ************************************ 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.105 [2024-11-20 15:13:18.398074] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
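The startup error just above is the point of the test: spdk_tgt refuses to combine --no-rpc-server with --wait-for-rpc, and skip_rpc_with_delay passes only if the launch fails. A minimal sketch of that negative-test pattern, reusing the exact invocation from this run (the expect_failure helper is hypothetical, standing in for the suite's NOT wrapper):

    # Succeed only if the wrapped command exits non-zero.
    expect_failure() {
        if "$@"; then
            echo "expected failure, got success" >&2
            return 1
        fi
        return 0
    }
    expect_failure /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc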
00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.105 00:05:32.105 real 0m0.090s 00:05:32.105 user 0m0.045s 00:05:32.105 sys 0m0.044s 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.105 15:13:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:32.105 ************************************ 00:05:32.105 END TEST skip_rpc_with_delay 00:05:32.105 ************************************ 00:05:32.105 15:13:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:32.105 15:13:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:32.105 15:13:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:32.105 15:13:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.105 15:13:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.105 15:13:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.105 ************************************ 00:05:32.105 START TEST exit_on_failed_rpc_init 00:05:32.105 ************************************ 00:05:32.105 15:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:32.105 15:13:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59169 00:05:32.105 15:13:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.105 15:13:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59169 00:05:32.105 15:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59169 ']' 00:05:32.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.105 15:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.105 15:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.105 15:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.105 15:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.105 15:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.105 [2024-11-20 15:13:18.560387] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
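Here waitforlisten blocks until the freshly launched target answers on /var/tmp/spdk.sock before the test proceeds. A hedged sketch of that polling idea, assuming rpc_get_methods as the probe (a stock SPDK RPC) and an arbitrary retry bound; the real helper does more bookkeeping:

    sock=/var/tmp/spdk.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll until the target's RPC server accepts a call, then continue.
    for _ in $(seq 1 100); do
        "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done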
00:05:32.105 [2024-11-20 15:13:18.560488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59169 ] 00:05:32.364 [2024-11-20 15:13:18.710122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.364 [2024-11-20 15:13:18.791783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:33.301 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.301 [2024-11-20 15:13:19.543562] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:05:33.301 [2024-11-20 15:13:19.543805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59195 ] 00:05:33.301 [2024-11-20 15:13:19.692735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.301 [2024-11-20 15:13:19.771865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.301 [2024-11-20 15:13:19.772105] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:33.301 [2024-11-20 15:13:19.772124] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:33.301 [2024-11-20 15:13:19.772132] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59169 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59169 ']' 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59169 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59169 00:05:33.561 killing process with pid 59169 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59169' 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59169 00:05:33.561 15:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59169 00:05:34.152 00:05:34.152 real 0m1.933s 00:05:34.152 user 0m2.051s 00:05:34.152 sys 0m0.563s 00:05:34.152 ************************************ 00:05:34.152 END TEST exit_on_failed_rpc_init 00:05:34.152 ************************************ 00:05:34.152 15:13:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.152 15:13:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.152 15:13:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:34.152 00:05:34.152 real 0m15.518s 00:05:34.152 user 0m14.282s 00:05:34.152 sys 0m2.245s 00:05:34.152 15:13:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.152 ************************************ 00:05:34.152 END TEST skip_rpc 00:05:34.152 ************************************ 00:05:34.152 15:13:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.152 15:13:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.152 15:13:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.152 15:13:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.152 15:13:20 -- common/autotest_common.sh@10 -- # set +x 00:05:34.152 
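The rpc.c/app.c errors above explain the expected failure in exit_on_failed_rpc_init: the second spdk_tgt (mask 0x2) defaulted to the same /var/tmp/spdk.sock that pid 59169 already owns, so spdk_app_start aborts, which is exactly what the NOT wrapper expects. When two targets genuinely need to coexist, the second one gets its own socket via -r, the same flag the json_config suite uses later in this log; a sketch:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &
    # A second instance must not reuse the default RPC socket.
    "$spdk_tgt" -m 0x2 -r /var/tmp/spdk2.sock &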
************************************ 00:05:34.152 START TEST rpc_client 00:05:34.152 ************************************ 00:05:34.152 15:13:20 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.432 * Looking for test storage... 00:05:34.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:34.432 15:13:20 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.432 15:13:20 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.432 15:13:20 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.432 15:13:20 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.432 15:13:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:34.432 15:13:20 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.432 15:13:20 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.432 --rc genhtml_branch_coverage=1 00:05:34.432 --rc genhtml_function_coverage=1 00:05:34.432 --rc genhtml_legend=1 00:05:34.432 --rc geninfo_all_blocks=1 00:05:34.432 --rc geninfo_unexecuted_blocks=1 00:05:34.432 00:05:34.432 ' 00:05:34.432 15:13:20 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.432 --rc genhtml_branch_coverage=1 00:05:34.432 --rc genhtml_function_coverage=1 00:05:34.432 --rc genhtml_legend=1 00:05:34.432 --rc geninfo_all_blocks=1 00:05:34.432 --rc geninfo_unexecuted_blocks=1 00:05:34.432 00:05:34.432 ' 00:05:34.432 15:13:20 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.432 --rc genhtml_branch_coverage=1 00:05:34.432 --rc genhtml_function_coverage=1 00:05:34.432 --rc genhtml_legend=1 00:05:34.432 --rc geninfo_all_blocks=1 00:05:34.432 --rc geninfo_unexecuted_blocks=1 00:05:34.432 00:05:34.432 ' 00:05:34.432 15:13:20 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.432 --rc genhtml_branch_coverage=1 00:05:34.432 --rc genhtml_function_coverage=1 00:05:34.432 --rc genhtml_legend=1 00:05:34.432 --rc geninfo_all_blocks=1 00:05:34.432 --rc geninfo_unexecuted_blocks=1 00:05:34.432 00:05:34.432 ' 00:05:34.432 15:13:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:34.432 OK 00:05:34.433 15:13:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:34.433 00:05:34.433 real 0m0.263s 00:05:34.433 user 0m0.139s 00:05:34.433 sys 0m0.140s 00:05:34.433 15:13:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.433 15:13:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:34.433 ************************************ 00:05:34.433 END TEST rpc_client 00:05:34.433 ************************************ 00:05:34.433 15:13:20 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.433 15:13:20 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.433 15:13:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.433 15:13:20 -- common/autotest_common.sh@10 -- # set +x 00:05:34.433 ************************************ 00:05:34.433 START TEST json_config 00:05:34.433 ************************************ 00:05:34.433 15:13:20 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.692 15:13:20 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.692 15:13:20 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.692 15:13:20 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.692 15:13:21 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.692 15:13:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.692 15:13:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.692 15:13:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.692 15:13:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.692 15:13:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.692 15:13:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.692 15:13:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.692 15:13:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.692 15:13:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.692 15:13:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.692 15:13:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.692 15:13:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:34.692 15:13:21 json_config -- scripts/common.sh@345 -- # : 1 00:05:34.692 15:13:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.692 15:13:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.692 15:13:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:34.692 15:13:21 json_config -- scripts/common.sh@353 -- # local d=1 00:05:34.692 15:13:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.692 15:13:21 json_config -- scripts/common.sh@355 -- # echo 1 00:05:34.692 15:13:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.692 15:13:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:34.692 15:13:21 json_config -- scripts/common.sh@353 -- # local d=2 00:05:34.692 15:13:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.692 15:13:21 json_config -- scripts/common.sh@355 -- # echo 2 00:05:34.692 15:13:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.692 15:13:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.692 15:13:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.692 15:13:21 json_config -- scripts/common.sh@368 -- # return 0 00:05:34.692 15:13:21 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.692 15:13:21 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.692 --rc genhtml_branch_coverage=1 00:05:34.692 --rc genhtml_function_coverage=1 00:05:34.692 --rc genhtml_legend=1 00:05:34.692 --rc geninfo_all_blocks=1 00:05:34.692 --rc geninfo_unexecuted_blocks=1 00:05:34.692 00:05:34.692 ' 00:05:34.693 15:13:21 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.693 --rc genhtml_branch_coverage=1 00:05:34.693 --rc genhtml_function_coverage=1 00:05:34.693 --rc genhtml_legend=1 00:05:34.693 --rc geninfo_all_blocks=1 00:05:34.693 --rc geninfo_unexecuted_blocks=1 00:05:34.693 00:05:34.693 ' 00:05:34.693 15:13:21 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.693 --rc genhtml_branch_coverage=1 00:05:34.693 --rc genhtml_function_coverage=1 00:05:34.693 --rc genhtml_legend=1 00:05:34.693 --rc geninfo_all_blocks=1 00:05:34.693 --rc geninfo_unexecuted_blocks=1 00:05:34.693 00:05:34.693 ' 00:05:34.693 15:13:21 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.693 --rc genhtml_branch_coverage=1 00:05:34.693 --rc genhtml_function_coverage=1 00:05:34.693 --rc genhtml_legend=1 00:05:34.693 --rc geninfo_all_blocks=1 00:05:34.693 --rc geninfo_unexecuted_blocks=1 00:05:34.693 00:05:34.693 ' 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.693 15:13:21 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:34.693 15:13:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.693 15:13:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.693 15:13:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.693 15:13:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.693 15:13:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.693 15:13:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.693 15:13:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.693 15:13:21 json_config -- paths/export.sh@5 -- # export PATH 00:05:34.693 15:13:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@51 -- # : 0 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.693 15:13:21 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.693 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.693 15:13:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:34.693 INFO: JSON configuration test init 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:34.693 15:13:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.693 15:13:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:34.693 15:13:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.693 15:13:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.693 Waiting for target to run... 00:05:34.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
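The stray diagnostic "[: : integer expression expected" from nvmf/common.sh line 33 is a real shell error, not test output: an empty variable was fed to a numeric test, the equivalent of [ '' -eq 1 ]. The usual guard is to default the value first (the variable name here is illustrative, not the one in common.sh):

    # Default an empty or unset flag to 0 before a numeric comparison.
    if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
        echo enabled
    fi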
00:05:34.693 15:13:21 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:34.693 15:13:21 json_config -- json_config/common.sh@9 -- # local app=target 00:05:34.693 15:13:21 json_config -- json_config/common.sh@10 -- # shift 00:05:34.693 15:13:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.693 15:13:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.693 15:13:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.693 15:13:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.693 15:13:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.693 15:13:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59340 00:05:34.693 15:13:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.693 15:13:21 json_config -- json_config/common.sh@25 -- # waitforlisten 59340 /var/tmp/spdk_tgt.sock 00:05:34.693 15:13:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 59340 ']' 00:05:34.693 15:13:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.693 15:13:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.693 15:13:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.693 15:13:21 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:34.694 15:13:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.694 15:13:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.952 [2024-11-20 15:13:21.222729] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
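This target is started with --wait-for-rpc, so it parks before subsystem initialization and waits for configuration to arrive over /var/tmp/spdk_tgt.sock; load_config then replays the JSON and resumes startup. A hedged sketch of that two-phase pattern (both RPCs exist in stock SPDK; the iobuf values echo the configuration dump earlier in this log, and the rpc.py flag spellings are assumed from its usual conventions):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    # Phase 1: pre-init configuration while the app is paused.
    $rpc iobuf_set_options --small-pool-count 8192 --large-pool-count 1024
    # Phase 2: let the framework finish subsystem initialization.
    $rpc framework_start_init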
00:05:34.952 [2024-11-20 15:13:21.223057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59340 ] 00:05:35.520 [2024-11-20 15:13:21.773566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.520 [2024-11-20 15:13:21.836782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.778 15:13:22 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.778 15:13:22 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:35.778 15:13:22 json_config -- json_config/common.sh@26 -- # echo '' 00:05:35.778 00:05:35.778 15:13:22 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:35.778 15:13:22 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:35.778 15:13:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.778 15:13:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.778 15:13:22 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:35.778 15:13:22 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:35.778 15:13:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.778 15:13:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.778 15:13:22 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:35.778 15:13:22 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:35.778 15:13:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:36.345 15:13:22 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:36.345 15:13:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:36.345 15:13:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.345 15:13:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.345 15:13:22 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:36.345 15:13:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:36.345 15:13:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:36.345 15:13:22 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:36.345 15:13:22 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:36.345 15:13:22 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:36.345 15:13:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:36.345 15:13:22 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:36.604 15:13:22 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:36.604 15:13:22 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:36.604 15:13:22 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:36.604 15:13:22 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister 
fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:36.604 15:13:22 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:36.604 15:13:22 json_config -- json_config/json_config.sh@54 -- # sort 00:05:36.604 15:13:22 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:36.604 15:13:22 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:36.604 15:13:22 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:36.604 15:13:22 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:36.604 15:13:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:36.604 15:13:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.604 15:13:23 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:36.604 15:13:23 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:36.604 15:13:23 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:36.604 15:13:23 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:36.604 15:13:23 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:36.604 15:13:23 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:36.604 15:13:23 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:36.604 15:13:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.604 15:13:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.604 15:13:23 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:36.605 15:13:23 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:36.605 15:13:23 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:36.605 15:13:23 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:36.605 15:13:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:36.863 MallocForNvmf0 00:05:36.863 15:13:23 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:36.863 15:13:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.122 MallocForNvmf1 00:05:37.122 15:13:23 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.122 15:13:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.381 [2024-11-20 15:13:23.764527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.381 15:13:23 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.381 15:13:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.640 15:13:24 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:37.640 15:13:24 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:37.899 15:13:24 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:37.899 15:13:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.157 15:13:24 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.157 15:13:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.415 [2024-11-20 15:13:24.787476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.415 15:13:24 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:38.415 15:13:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.415 15:13:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.415 15:13:24 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:38.416 15:13:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.416 15:13:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.674 15:13:24 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:38.674 15:13:24 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.674 15:13:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.965 MallocBdevForConfigChangeCheck 00:05:38.965 15:13:25 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:38.965 15:13:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.965 15:13:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.965 15:13:25 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:38.965 15:13:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.227 INFO: shutting down applications... 00:05:39.227 15:13:25 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
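Before the shutdown below, it is worth collecting the tgt_rpc calls that built this NVMe-oF target into one block; every command is taken verbatim from the trace above, only the $rpc shorthand is added:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420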
00:05:39.227 15:13:25 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:39.227 15:13:25 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:39.227 15:13:25 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:39.227 15:13:25 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:39.484 Calling clear_iscsi_subsystem 00:05:39.484 Calling clear_nvmf_subsystem 00:05:39.484 Calling clear_nbd_subsystem 00:05:39.484 Calling clear_ublk_subsystem 00:05:39.484 Calling clear_vhost_blk_subsystem 00:05:39.484 Calling clear_vhost_scsi_subsystem 00:05:39.484 Calling clear_bdev_subsystem 00:05:39.484 15:13:25 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:39.484 15:13:25 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:39.484 15:13:25 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:39.484 15:13:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.484 15:13:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:39.484 15:13:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:40.049 15:13:26 json_config -- json_config/json_config.sh@352 -- # break 00:05:40.049 15:13:26 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:40.049 15:13:26 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:40.049 15:13:26 json_config -- json_config/common.sh@31 -- # local app=target 00:05:40.049 15:13:26 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:40.049 15:13:26 json_config -- json_config/common.sh@35 -- # [[ -n 59340 ]] 00:05:40.049 15:13:26 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59340 00:05:40.049 15:13:26 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:40.049 15:13:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.049 15:13:26 json_config -- json_config/common.sh@41 -- # kill -0 59340 00:05:40.049 15:13:26 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.614 15:13:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.614 15:13:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.614 15:13:26 json_config -- json_config/common.sh@41 -- # kill -0 59340 00:05:40.614 15:13:26 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.614 15:13:26 json_config -- json_config/common.sh@43 -- # break 00:05:40.614 15:13:26 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.614 SPDK target shutdown done 00:05:40.614 15:13:26 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.614 INFO: relaunching applications... 00:05:40.614 15:13:26 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
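json_config_test_shutdown_app above is a bounded SIGINT-and-poll: signal the target, then probe with kill -0 until the pid disappears. A sketch with the same bounds as the trace (30 attempts, 0.5 s apart; the pid is the one from this run):

    pid=59340
    kill -SIGINT "$pid"
    # Poll until the process is gone or the retry budget is spent.
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done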
00:05:40.614 15:13:26 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:40.614 15:13:26 json_config -- json_config/common.sh@9 -- # local app=target 00:05:40.614 15:13:26 json_config -- json_config/common.sh@10 -- # shift 00:05:40.614 15:13:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.614 15:13:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.614 15:13:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.614 15:13:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.614 15:13:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.614 15:13:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59615 00:05:40.614 15:13:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.614 Waiting for target to run... 00:05:40.614 15:13:26 json_config -- json_config/common.sh@25 -- # waitforlisten 59615 /var/tmp/spdk_tgt.sock 00:05:40.614 15:13:26 json_config -- common/autotest_common.sh@835 -- # '[' -z 59615 ']' 00:05:40.614 15:13:26 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.614 15:13:26 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.614 15:13:26 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.614 15:13:26 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.614 15:13:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.614 15:13:26 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:40.614 [2024-11-20 15:13:26.937971] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:05:40.614 [2024-11-20 15:13:26.938668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59615 ] 00:05:41.177 [2024-11-20 15:13:27.480486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.177 [2024-11-20 15:13:27.543925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.434 [2024-11-20 15:13:27.903634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.693 [2024-11-20 15:13:27.935705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.693 15:13:27 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.693 00:05:41.693 15:13:27 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:41.693 15:13:27 json_config -- json_config/common.sh@26 -- # echo '' 00:05:41.693 INFO: Checking if target configuration is the same... 00:05:41.693 15:13:27 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:41.693 15:13:27 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 
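The check that follows saves the live configuration again and diffs it against the spdk_tgt_config.json the target was relaunched from; both sides go through config_filter.py -method sort so key ordering cannot cause a false mismatch. A hedged sketch of the same round-trip (temporary file names are illustrative, and config_filter.py is assumed to filter stdin to stdout, as json_diff.sh uses it):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    $rpc save_config | "$filter" -method sort > /tmp/live.json
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/boot.json
    diff -u /tmp/live.json /tmp/boot.json && echo 'configs match'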
00:05:41.693 15:13:27 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:41.693 15:13:27 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:41.693 15:13:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.693 + '[' 2 -ne 2 ']' 00:05:41.693 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:41.693 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:41.693 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:41.693 +++ basename /dev/fd/62 00:05:41.693 ++ mktemp /tmp/62.XXX 00:05:41.693 + tmp_file_1=/tmp/62.nM5 00:05:41.693 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:41.693 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.693 + tmp_file_2=/tmp/spdk_tgt_config.json.89Z 00:05:41.693 + ret=0 00:05:41.693 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:41.951 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:41.951 + diff -u /tmp/62.nM5 /tmp/spdk_tgt_config.json.89Z 00:05:42.210 INFO: JSON config files are the same 00:05:42.210 + echo 'INFO: JSON config files are the same' 00:05:42.210 + rm /tmp/62.nM5 /tmp/spdk_tgt_config.json.89Z 00:05:42.210 + exit 0 00:05:42.210 15:13:28 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:42.210 INFO: changing configuration and checking if this can be detected... 00:05:42.210 15:13:28 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:42.210 15:13:28 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.210 15:13:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.469 15:13:28 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:42.469 15:13:28 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:42.469 15:13:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.469 + '[' 2 -ne 2 ']' 00:05:42.469 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:42.469 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:42.469 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:42.469 +++ basename /dev/fd/62 00:05:42.469 ++ mktemp /tmp/62.XXX 00:05:42.469 + tmp_file_1=/tmp/62.PZk 00:05:42.469 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:42.469 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:42.470 + tmp_file_2=/tmp/spdk_tgt_config.json.HiJ 00:05:42.470 + ret=0 00:05:42.470 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:42.729 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:42.729 + diff -u /tmp/62.PZk /tmp/spdk_tgt_config.json.HiJ 00:05:42.729 + ret=1 00:05:42.729 + echo '=== Start of file: /tmp/62.PZk ===' 00:05:42.729 + cat /tmp/62.PZk 00:05:42.729 + echo '=== End of file: /tmp/62.PZk ===' 00:05:42.729 + echo '' 00:05:42.730 + echo '=== Start of file: /tmp/spdk_tgt_config.json.HiJ ===' 00:05:42.730 + cat /tmp/spdk_tgt_config.json.HiJ 00:05:42.730 + echo '=== End of file: /tmp/spdk_tgt_config.json.HiJ ===' 00:05:42.730 + echo '' 00:05:42.730 + rm /tmp/62.PZk /tmp/spdk_tgt_config.json.HiJ 00:05:42.730 + exit 1 00:05:42.730 INFO: configuration change detected. 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:42.730 15:13:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.730 15:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@324 -- # [[ -n 59615 ]] 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:42.730 15:13:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.730 15:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:42.730 15:13:29 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:42.730 15:13:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.730 15:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.988 15:13:29 json_config -- json_config/json_config.sh@330 -- # killprocess 59615 00:05:42.988 15:13:29 json_config -- common/autotest_common.sh@954 -- # '[' -z 59615 ']' 00:05:42.988 15:13:29 json_config -- common/autotest_common.sh@958 -- # kill -0 59615 00:05:42.988 15:13:29 json_config -- common/autotest_common.sh@959 -- # uname 00:05:42.988 15:13:29 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.988 15:13:29 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59615 00:05:42.988 
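
The teardown that begins above and continues below is the harness's killprocess idiom: confirm the PID still belongs to the app by checking its command name with ps, send the default SIGTERM, then wait so the exit status is reaped. A sketch under the same assumptions (in the harness the app is a child of the test shell, which is what makes wait valid):

    # killprocess sketch: refuse to kill a recycled PID, then terminate and reap.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
        [ "$name" = sudo ] && return 1              # never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }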
15:13:29 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.988 15:13:29 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.988 killing process with pid 59615 00:05:42.988 15:13:29 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59615' 00:05:42.988 15:13:29 json_config -- common/autotest_common.sh@973 -- # kill 59615 00:05:42.988 15:13:29 json_config -- common/autotest_common.sh@978 -- # wait 59615 00:05:43.247 15:13:29 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:43.247 15:13:29 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:43.247 15:13:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.247 15:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.247 15:13:29 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:43.247 15:13:29 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:43.247 INFO: Success 00:05:43.247 00:05:43.247 real 0m8.760s 00:05:43.247 user 0m11.785s 00:05:43.247 sys 0m2.568s 00:05:43.247 15:13:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.247 15:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.248 ************************************ 00:05:43.248 END TEST json_config 00:05:43.248 ************************************ 00:05:43.248 15:13:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:43.248 15:13:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.248 15:13:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.248 15:13:29 -- common/autotest_common.sh@10 -- # set +x 00:05:43.508 ************************************ 00:05:43.508 START TEST json_config_extra_key 00:05:43.508 ************************************ 00:05:43.508 15:13:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:43.508 15:13:29 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.508 15:13:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.508 15:13:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.508 15:13:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.508 15:13:29 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:43.508 15:13:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.508 15:13:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.508 --rc genhtml_branch_coverage=1 00:05:43.508 --rc genhtml_function_coverage=1 00:05:43.508 --rc genhtml_legend=1 00:05:43.508 --rc geninfo_all_blocks=1 00:05:43.508 --rc geninfo_unexecuted_blocks=1 00:05:43.508 00:05:43.508 ' 00:05:43.508 15:13:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.508 --rc genhtml_branch_coverage=1 00:05:43.508 --rc genhtml_function_coverage=1 00:05:43.508 --rc genhtml_legend=1 00:05:43.508 --rc geninfo_all_blocks=1 00:05:43.508 --rc geninfo_unexecuted_blocks=1 00:05:43.508 00:05:43.508 ' 00:05:43.508 15:13:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.508 --rc genhtml_branch_coverage=1 00:05:43.508 --rc genhtml_function_coverage=1 00:05:43.508 --rc genhtml_legend=1 00:05:43.508 --rc geninfo_all_blocks=1 00:05:43.508 --rc geninfo_unexecuted_blocks=1 00:05:43.508 00:05:43.508 ' 00:05:43.508 15:13:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.508 --rc genhtml_branch_coverage=1 00:05:43.508 --rc genhtml_function_coverage=1 00:05:43.508 --rc genhtml_legend=1 00:05:43.508 --rc geninfo_all_blocks=1 00:05:43.508 --rc geninfo_unexecuted_blocks=1 00:05:43.508 00:05:43.508 ' 00:05:43.508 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.508 15:13:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.508 15:13:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.508 15:13:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.508 15:13:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.509 15:13:29 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.509 15:13:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:43.509 15:13:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.509 15:13:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:43.509 15:13:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:43.509 15:13:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:43.509 15:13:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.509 15:13:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.509 15:13:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.509 15:13:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:43.509 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:43.509 15:13:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:43.509 15:13:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:43.509 15:13:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:43.509 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:43.509 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:43.509 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:43.509 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.509 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:43.509 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.509 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:43.509 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:43.509 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:43.509 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.509 INFO: launching applications... 00:05:43.509 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
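
One line in the sourced nvmf/common.sh above deserves a note: '[' '' -eq 1 ']' fails with "[: : integer expression expected" because an unset flag reaches the arithmetic test as an empty string; the script shrugs the failure off, since the branch is simply not taken. The defensive spelling checks for emptiness first. A generic sketch, with flag standing in for whichever variable was empty at line 33:

    # Guard integer tests against empty/unset values. The bare form below
    # reproduces the "[: : integer expression expected" error from the log.
    flag=""                      # hypothetical stand-in for the unset variable
    [ "$flag" -eq 1 ] 2>/dev/null && echo "never reached"
    # Robust form: the -n test short-circuits before the numeric comparison.
    if [[ -n ${flag:-} && $flag -eq 1 ]]; then
        echo "flag enabled"
    fi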
00:05:43.509 15:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:43.509 15:13:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:43.509 15:13:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:43.509 15:13:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.509 15:13:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.509 15:13:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.509 15:13:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.509 15:13:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.509 15:13:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59800 00:05:43.509 15:13:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:43.509 Waiting for target to run... 00:05:43.509 15:13:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.509 15:13:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59800 /var/tmp/spdk_tgt.sock 00:05:43.509 15:13:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59800 ']' 00:05:43.509 15:13:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.509 15:13:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.509 15:13:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.509 15:13:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.509 15:13:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.768 [2024-11-20 15:13:30.011096] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:05:43.768 [2024-11-20 15:13:30.011218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59800 ] 00:05:44.345 [2024-11-20 15:13:30.557215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.345 [2024-11-20 15:13:30.620735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.618 15:13:30 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.618 15:13:30 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:44.618 00:05:44.618 15:13:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:44.618 INFO: shutting down applications... 00:05:44.618 15:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
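
Pulling the common.sh trace together: the harness keys every per-app attribute off one associative array per property (app_pid, app_socket, app_params, configs_path), so "target" and any other app share the same start and stop code paths. A condensed sketch of that bookkeeping, with values taken from the trace above:

    # Per-app bookkeeping: one associative array per attribute, keyed by app name.
    declare -A app_pid app_socket app_params configs_path
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=./test/json_config/extra_key.json

    app=target
    # app_params is intentionally unquoted so '-m 0x1 -s 1024' splits into flags.
    ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!
    echo 'Waiting for target to run...'

The shutdown traced next mirrors this: kill -SIGINT "${app_pid[$app]}", then poll kill -0 up to 30 times with 0.5 s sleeps before declaring "SPDK target shutdown done".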
00:05:44.618 15:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:44.618 15:13:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:44.618 15:13:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.618 15:13:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59800 ]] 00:05:44.618 15:13:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59800 00:05:44.618 15:13:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.618 15:13:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.618 15:13:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59800 00:05:44.618 15:13:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.187 15:13:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.187 15:13:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.187 15:13:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59800 00:05:45.187 15:13:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.757 15:13:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.757 15:13:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.757 15:13:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59800 00:05:45.757 15:13:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:45.757 15:13:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:45.757 15:13:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:45.757 SPDK target shutdown done 00:05:45.757 15:13:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:45.757 Success 00:05:45.757 15:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:45.757 00:05:45.757 real 0m2.243s 00:05:45.757 user 0m1.553s 00:05:45.757 sys 0m0.642s 00:05:45.757 15:13:31 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.757 15:13:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.757 ************************************ 00:05:45.757 END TEST json_config_extra_key 00:05:45.757 ************************************ 00:05:45.757 15:13:32 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.757 15:13:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.757 15:13:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.757 15:13:32 -- common/autotest_common.sh@10 -- # set +x 00:05:45.757 ************************************ 00:05:45.757 START TEST alias_rpc 00:05:45.757 ************************************ 00:05:45.757 15:13:32 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.757 * Looking for test storage... 
00:05:45.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:45.757 15:13:32 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.757 15:13:32 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.757 15:13:32 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.017 15:13:32 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.017 15:13:32 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:46.017 15:13:32 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.017 15:13:32 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.017 --rc genhtml_branch_coverage=1 00:05:46.017 --rc genhtml_function_coverage=1 00:05:46.017 --rc genhtml_legend=1 00:05:46.017 --rc geninfo_all_blocks=1 00:05:46.017 --rc geninfo_unexecuted_blocks=1 00:05:46.017 00:05:46.017 ' 00:05:46.017 15:13:32 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.017 --rc genhtml_branch_coverage=1 00:05:46.017 --rc genhtml_function_coverage=1 00:05:46.017 --rc genhtml_legend=1 00:05:46.017 --rc geninfo_all_blocks=1 00:05:46.017 --rc geninfo_unexecuted_blocks=1 00:05:46.017 00:05:46.017 ' 00:05:46.017 15:13:32 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.017 --rc genhtml_branch_coverage=1 00:05:46.017 --rc genhtml_function_coverage=1 00:05:46.017 --rc genhtml_legend=1 00:05:46.017 --rc geninfo_all_blocks=1 00:05:46.017 --rc geninfo_unexecuted_blocks=1 00:05:46.017 00:05:46.017 ' 00:05:46.017 15:13:32 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.017 --rc genhtml_branch_coverage=1 00:05:46.017 --rc genhtml_function_coverage=1 00:05:46.017 --rc genhtml_legend=1 00:05:46.017 --rc geninfo_all_blocks=1 00:05:46.017 --rc geninfo_unexecuted_blocks=1 00:05:46.017 00:05:46.017 ' 00:05:46.017 15:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:46.017 15:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59902 00:05:46.017 15:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.017 15:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59902 00:05:46.017 15:13:32 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59902 ']' 00:05:46.017 15:13:32 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.017 15:13:32 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.017 15:13:32 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.017 15:13:32 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.017 15:13:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.017 [2024-11-20 15:13:32.331490] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:05:46.017 [2024-11-20 15:13:32.331596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59902 ] 00:05:46.017 [2024-11-20 15:13:32.484408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.277 [2024-11-20 15:13:32.567440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.846 15:13:33 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.846 15:13:33 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.846 15:13:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:47.106 15:13:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59902 00:05:47.106 15:13:33 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59902 ']' 00:05:47.106 15:13:33 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59902 00:05:47.365 15:13:33 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.365 15:13:33 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.365 15:13:33 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59902 00:05:47.365 15:13:33 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.365 15:13:33 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.365 15:13:33 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59902' 00:05:47.365 killing process with pid 59902 00:05:47.365 15:13:33 alias_rpc -- common/autotest_common.sh@973 -- # kill 59902 00:05:47.365 15:13:33 alias_rpc -- common/autotest_common.sh@978 -- # wait 59902 00:05:47.935 00:05:47.935 real 0m2.119s 00:05:47.935 user 0m2.204s 00:05:47.935 sys 0m0.644s 00:05:47.935 15:13:34 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.935 ************************************ 00:05:47.935 END TEST alias_rpc 00:05:47.935 ************************************ 00:05:47.935 15:13:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.935 15:13:34 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:05:47.935 15:13:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.935 15:13:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.935 15:13:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.935 15:13:34 -- common/autotest_common.sh@10 -- # set +x 00:05:47.935 ************************************ 00:05:47.935 START TEST dpdk_mem_utility 00:05:47.935 ************************************ 00:05:47.935 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.935 * Looking for test storage... 
00:05:47.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:47.935 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.935 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.935 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.195 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.195 15:13:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:48.195 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.195 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.195 --rc genhtml_branch_coverage=1 00:05:48.195 --rc genhtml_function_coverage=1 00:05:48.195 --rc genhtml_legend=1 00:05:48.195 --rc geninfo_all_blocks=1 00:05:48.195 --rc geninfo_unexecuted_blocks=1 00:05:48.195 00:05:48.195 ' 00:05:48.195 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.195 --rc 
genhtml_branch_coverage=1 00:05:48.195 --rc genhtml_function_coverage=1 00:05:48.195 --rc genhtml_legend=1 00:05:48.195 --rc geninfo_all_blocks=1 00:05:48.195 --rc geninfo_unexecuted_blocks=1 00:05:48.195 00:05:48.195 ' 00:05:48.195 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.195 --rc genhtml_branch_coverage=1 00:05:48.195 --rc genhtml_function_coverage=1 00:05:48.195 --rc genhtml_legend=1 00:05:48.195 --rc geninfo_all_blocks=1 00:05:48.195 --rc geninfo_unexecuted_blocks=1 00:05:48.195 00:05:48.195 ' 00:05:48.195 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.195 --rc genhtml_branch_coverage=1 00:05:48.195 --rc genhtml_function_coverage=1 00:05:48.195 --rc genhtml_legend=1 00:05:48.195 --rc geninfo_all_blocks=1 00:05:48.195 --rc geninfo_unexecuted_blocks=1 00:05:48.195 00:05:48.195 ' 00:05:48.195 15:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:48.195 15:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60002 00:05:48.195 15:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.195 15:13:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60002 00:05:48.195 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60002 ']' 00:05:48.195 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.196 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.196 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.196 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.196 15:13:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.196 [2024-11-20 15:13:34.503172] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:05:48.196 [2024-11-20 15:13:34.503276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60002 ] 00:05:48.196 [2024-11-20 15:13:34.657624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.456 [2024-11-20 15:13:34.741883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.025 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.025 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:49.025 15:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:49.025 15:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:49.025 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.025 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.025 { 00:05:49.025 "filename": "/tmp/spdk_mem_dump.txt" 00:05:49.025 } 00:05:49.025 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.025 15:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:49.287 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:49.287 1 heaps totaling size 818.000000 MiB 00:05:49.287 size: 818.000000 MiB heap id: 0 00:05:49.287 end heaps---------- 00:05:49.287 9 mempools totaling size 603.782043 MiB 00:05:49.287 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:49.287 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:49.287 size: 100.555481 MiB name: bdev_io_60002 00:05:49.287 size: 50.003479 MiB name: msgpool_60002 00:05:49.287 size: 36.509338 MiB name: fsdev_io_60002 00:05:49.287 size: 21.763794 MiB name: PDU_Pool 00:05:49.287 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:49.287 size: 4.133484 MiB name: evtpool_60002 00:05:49.287 size: 0.026123 MiB name: Session_Pool 00:05:49.287 end mempools------- 00:05:49.287 6 memzones totaling size 4.142822 MiB 00:05:49.287 size: 1.000366 MiB name: RG_ring_0_60002 00:05:49.287 size: 1.000366 MiB name: RG_ring_1_60002 00:05:49.287 size: 1.000366 MiB name: RG_ring_4_60002 00:05:49.287 size: 1.000366 MiB name: RG_ring_5_60002 00:05:49.287 size: 0.125366 MiB name: RG_ring_2_60002 00:05:49.287 size: 0.015991 MiB name: RG_ring_3_60002 00:05:49.287 end memzones------- 00:05:49.287 15:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:49.287 heap id: 0 total size: 818.000000 MiB number of busy elements: 228 number of free elements: 15 00:05:49.287 list of free elements. 
size: 10.818787 MiB 00:05:49.287 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:49.287 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:49.287 element at address: 0x200000400000 with size: 0.996155 MiB 00:05:49.287 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:49.287 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:49.287 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:49.287 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:49.287 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:49.287 element at address: 0x20001ae00000 with size: 0.572632 MiB 00:05:49.287 element at address: 0x200000c00000 with size: 0.490845 MiB 00:05:49.287 element at address: 0x20000a600000 with size: 0.489807 MiB 00:05:49.287 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:49.287 element at address: 0x200003e00000 with size: 0.481201 MiB 00:05:49.287 element at address: 0x200028200000 with size: 0.396851 MiB 00:05:49.287 element at address: 0x200000800000 with size: 0.353394 MiB 00:05:49.287 list of standard malloc elements. size: 199.252319 MiB 00:05:49.287 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:49.287 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:49.287 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:49.287 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:49.287 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:49.287 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:49.287 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:49.287 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:49.287 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:49.287 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000085a780 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000085a980 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000087ef00 with size: 0.000183 MiB 
00:05:49.287 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:49.287 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:49.287 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:49.287 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:49.287 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:49.287 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:49.287 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:49.287 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:49.287 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:49.287 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:49.287 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:49.287 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:49.288 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:49.288 element at 
address: 0x20000a67d700 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:49.288 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:49.288 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:49.288 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:49.288 element at address: 0x20001ae94600 
with size: 0.000183 MiB
00:05:49.288 [malloc element dump elided: roughly one hundred further entries of the form "element at address 0x... with size: 0.000183 MiB", spanning the 0x20001ae94xxx and 0x20002826xxxx regions]
00:05:49.289 list of memzone associated elements. size: 607.928894 MiB
00:05:49.289 [memzone association dump elided: the largest associated memzones are MP_PDU_immediate_data_Pool_0 (211.416626 MiB), MP_PDU_data_out_Pool_0 (157.562439 MiB), MP_bdev_io_60002_0 (100.054932 MiB), MP_msgpool_60002_0 (48.002930 MiB), MP_fsdev_io_60002_0 (36.008789 MiB), MP_PDU_Pool_0 (20.255432 MiB) and MP_SCSI_TASK_Pool_0 (18.004944 MiB), followed by the per-pool evtpool, RG_MP_*/RG_ring_* and Session_Pool regions from 3.000122 MiB down to 0.000183 MiB]
00:05:49.289 15:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:49.290 15:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60002
00:05:49.290 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60002 ']'
00:05:49.290 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60002
00:05:49.290 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:49.290 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:49.290 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60002
00:05:49.290 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 60002
15:13:35 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:49.290 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60002'
00:05:49.290 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60002
00:05:49.290 15:13:35 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60002
00:05:49.858
00:05:49.858 real 0m1.961s
00:05:49.858 user 0m1.919s
00:05:49.858 sys 0m0.635s
00:05:49.858 15:13:36 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:49.858 15:13:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:49.858 ************************************
00:05:49.858 END TEST dpdk_mem_utility
00:05:49.858 ************************************
00:05:49.858 15:13:36 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:49.858 15:13:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:49.858 15:13:36 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:49.858 15:13:36 -- common/autotest_common.sh@10 -- # set +x
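The @954-@978 steps above are autotest_common.sh's killprocess teardown for the dpdk_mem_utility app (pid 60002). A condensed bash sketch of the flow, reconstructed from the xtrace and deliberately simplified (the sudo special case probed at @964 is omitted):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # @954: a pid is required
        kill -0 "$pid" || return 1                           # @958: bail if already gone
        if [ "$(uname)" = Linux ]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_0 here
        fi
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973: default SIGTERM
        wait "$pid"                                          # @978: reap the process
    }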
00:05:49.858 ************************************ 00:05:49.858 START TEST event 00:05:49.858 ************************************ 00:05:49.858 15:13:36 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:50.118 * Looking for test storage... 00:05:50.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:50.118 15:13:36 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.118 15:13:36 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.118 15:13:36 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.118 15:13:36 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.118 15:13:36 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.118 15:13:36 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.118 15:13:36 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.118 15:13:36 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.118 15:13:36 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.118 15:13:36 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.118 15:13:36 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.118 15:13:36 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.118 15:13:36 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.118 15:13:36 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.118 15:13:36 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.118 15:13:36 event -- scripts/common.sh@344 -- # case "$op" in 00:05:50.118 15:13:36 event -- scripts/common.sh@345 -- # : 1 00:05:50.118 15:13:36 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.118 15:13:36 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.118 15:13:36 event -- scripts/common.sh@365 -- # decimal 1 00:05:50.118 15:13:36 event -- scripts/common.sh@353 -- # local d=1 00:05:50.118 15:13:36 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.118 15:13:36 event -- scripts/common.sh@355 -- # echo 1 00:05:50.118 15:13:36 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.118 15:13:36 event -- scripts/common.sh@366 -- # decimal 2 00:05:50.118 15:13:36 event -- scripts/common.sh@353 -- # local d=2 00:05:50.118 15:13:36 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.118 15:13:36 event -- scripts/common.sh@355 -- # echo 2 00:05:50.118 15:13:36 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.118 15:13:36 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.118 15:13:36 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.118 15:13:36 event -- scripts/common.sh@368 -- # return 0 00:05:50.118 15:13:36 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.118 15:13:36 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.119 --rc genhtml_branch_coverage=1 00:05:50.119 --rc genhtml_function_coverage=1 00:05:50.119 --rc genhtml_legend=1 00:05:50.119 --rc geninfo_all_blocks=1 00:05:50.119 --rc geninfo_unexecuted_blocks=1 00:05:50.119 00:05:50.119 ' 00:05:50.119 15:13:36 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.119 --rc genhtml_branch_coverage=1 00:05:50.119 --rc genhtml_function_coverage=1 00:05:50.119 --rc genhtml_legend=1 00:05:50.119 --rc 
geninfo_all_blocks=1 00:05:50.119 --rc geninfo_unexecuted_blocks=1 00:05:50.119 00:05:50.119 ' 00:05:50.119 15:13:36 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.119 --rc genhtml_branch_coverage=1 00:05:50.119 --rc genhtml_function_coverage=1 00:05:50.119 --rc genhtml_legend=1 00:05:50.119 --rc geninfo_all_blocks=1 00:05:50.119 --rc geninfo_unexecuted_blocks=1 00:05:50.119 00:05:50.119 ' 00:05:50.119 15:13:36 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.119 --rc genhtml_branch_coverage=1 00:05:50.119 --rc genhtml_function_coverage=1 00:05:50.119 --rc genhtml_legend=1 00:05:50.119 --rc geninfo_all_blocks=1 00:05:50.119 --rc geninfo_unexecuted_blocks=1 00:05:50.119 00:05:50.119 ' 00:05:50.119 15:13:36 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:50.119 15:13:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:50.119 15:13:36 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:50.119 15:13:36 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:50.119 15:13:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.119 15:13:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.119 ************************************ 00:05:50.119 START TEST event_perf 00:05:50.119 ************************************ 00:05:50.119 15:13:36 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:50.119 Running I/O for 1 seconds...[2024-11-20 15:13:36.528708] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:05:50.119 [2024-11-20 15:13:36.528964] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60105 ] 00:05:50.378 [2024-11-20 15:13:36.683088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.378 [2024-11-20 15:13:36.772502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.378 [2024-11-20 15:13:36.772634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.378 [2024-11-20 15:13:36.772800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.378 Running I/O for 1 seconds...[2024-11-20 15:13:36.772804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.761 00:05:51.761 lcore 0: 188392 00:05:51.761 lcore 1: 188391 00:05:51.761 lcore 2: 188389 00:05:51.761 lcore 3: 188390 00:05:51.761 done. 
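Each reactor in the event_perf run above reports its own event count for the one-second window; summing them gives the aggregate rate (a quick arithmetic check on the numbers printed):

    $ echo $((188392 + 188391 + 188389 + 188390))
    753562

So the 0xF mask sustained roughly 753k events/s, and the per-core counts differ by at most 3 events, i.e. the load was spread essentially evenly across the four lcores.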
00:05:51.761 00:05:51.761 real 0m1.342s 00:05:51.761 user 0m4.142s 00:05:51.761 sys 0m0.070s 00:05:51.761 15:13:37 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.761 ************************************ 00:05:51.761 END TEST event_perf 00:05:51.761 ************************************ 00:05:51.761 15:13:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.761 15:13:37 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:51.761 15:13:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:51.761 15:13:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.761 15:13:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.761 ************************************ 00:05:51.761 START TEST event_reactor 00:05:51.761 ************************************ 00:05:51.761 15:13:37 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:51.761 [2024-11-20 15:13:37.948953] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:05:51.761 [2024-11-20 15:13:37.949077] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60138 ] 00:05:51.761 [2024-11-20 15:13:38.105799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.761 [2024-11-20 15:13:38.190509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.139 test_start 00:05:53.139 oneshot 00:05:53.139 tick 100 00:05:53.139 tick 100 00:05:53.139 tick 250 00:05:53.139 tick 100 00:05:53.139 tick 100 00:05:53.139 tick 100 00:05:53.139 tick 250 00:05:53.139 tick 500 00:05:53.139 tick 100 00:05:53.139 tick 100 00:05:53.139 tick 250 00:05:53.139 tick 100 00:05:53.139 tick 100 00:05:53.139 test_end 00:05:53.139 ************************************ 00:05:53.139 END TEST event_reactor 00:05:53.139 ************************************ 00:05:53.139 00:05:53.139 real 0m1.337s 00:05:53.139 user 0m1.163s 00:05:53.139 sys 0m0.066s 00:05:53.139 15:13:39 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.139 15:13:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:53.139 15:13:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.139 15:13:39 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:53.139 15:13:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.139 15:13:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.139 ************************************ 00:05:53.139 START TEST event_reactor_perf 00:05:53.139 ************************************ 00:05:53.139 15:13:39 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.139 [2024-11-20 15:13:39.347558] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
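For reference, the event_reactor tick trace above (oneshot, then interleaved tick 100 / tick 250 / tick 500 lines) shows a one-shot event plus periodic timers whose periods stand in a 1 : 2.5 : 5 ratio, so the shortest-period timer fires most often. Counting occurrences in a saved copy of the run bears that out (reactor.log is a hypothetical capture of the output above):

    $ grep -c 'tick 100' reactor.log   # shortest period, most firings
    9
    $ grep -c 'tick 250' reactor.log
    3
    $ grep -c 'tick 500' reactor.log
    1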
00:05:53.139 [2024-11-20 15:13:39.347715] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60168 ] 00:05:53.139 [2024-11-20 15:13:39.503367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.139 [2024-11-20 15:13:39.586702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.516 test_start 00:05:54.516 test_end 00:05:54.516 Performance: 475780 events per second 00:05:54.516 00:05:54.516 real 0m1.333s 00:05:54.516 user 0m1.165s 00:05:54.516 sys 0m0.061s 00:05:54.516 15:13:40 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.516 15:13:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.516 ************************************ 00:05:54.516 END TEST event_reactor_perf 00:05:54.516 ************************************ 00:05:54.516 15:13:40 event -- event/event.sh@49 -- # uname -s 00:05:54.516 15:13:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:54.516 15:13:40 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:54.516 15:13:40 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.516 15:13:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.516 15:13:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.516 ************************************ 00:05:54.516 START TEST event_scheduler 00:05:54.516 ************************************ 00:05:54.516 15:13:40 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:54.516 * Looking for test storage... 
00:05:54.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:54.516 15:13:40 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.516 15:13:40 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.516 15:13:40 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.516 15:13:40 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.516 15:13:40 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:54.516 15:13:40 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.516 15:13:40 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.516 --rc genhtml_branch_coverage=1 00:05:54.516 --rc genhtml_function_coverage=1 00:05:54.516 --rc genhtml_legend=1 00:05:54.516 --rc geninfo_all_blocks=1 00:05:54.516 --rc geninfo_unexecuted_blocks=1 00:05:54.516 00:05:54.517 ' 00:05:54.517 15:13:40 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.517 --rc genhtml_branch_coverage=1 00:05:54.517 --rc genhtml_function_coverage=1 00:05:54.517 --rc genhtml_legend=1 00:05:54.517 --rc geninfo_all_blocks=1 00:05:54.517 --rc geninfo_unexecuted_blocks=1 00:05:54.517 00:05:54.517 ' 00:05:54.517 15:13:40 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.517 --rc genhtml_branch_coverage=1 00:05:54.517 --rc genhtml_function_coverage=1 00:05:54.517 --rc genhtml_legend=1 00:05:54.517 --rc geninfo_all_blocks=1 00:05:54.517 --rc geninfo_unexecuted_blocks=1 00:05:54.517 00:05:54.517 ' 00:05:54.517 15:13:40 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.517 --rc genhtml_branch_coverage=1 00:05:54.517 --rc genhtml_function_coverage=1 00:05:54.517 --rc genhtml_legend=1 00:05:54.517 --rc geninfo_all_blocks=1 00:05:54.517 --rc geninfo_unexecuted_blocks=1 00:05:54.517 00:05:54.517 ' 00:05:54.517 15:13:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:54.517 15:13:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60243 00:05:54.517 15:13:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:54.517 15:13:40 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.517 15:13:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60243 00:05:54.517 15:13:40 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60243 ']' 00:05:54.517 15:13:40 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.517 15:13:40 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.517 15:13:40 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.517 15:13:40 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.517 15:13:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.776 [2024-11-20 15:13:41.003500] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:05:54.776 [2024-11-20 15:13:41.003609] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60243 ] 00:05:54.776 [2024-11-20 15:13:41.158927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.776 [2024-11-20 15:13:41.249912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.776 [2024-11-20 15:13:41.250060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.776 [2024-11-20 15:13:41.250119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.776 [2024-11-20 15:13:41.250121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.715 15:13:41 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.715 15:13:41 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:55.715 15:13:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:55.715 15:13:41 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.715 15:13:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.715 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.715 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.715 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.715 POWER: Cannot set governor of lcore 0 to performance 00:05:55.715 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.715 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.715 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.715 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.715 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:55.715 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:55.715 POWER: Unable to set Power Management Environment for lcore 0 00:05:55.715 [2024-11-20 15:13:41.979459] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:55.715 [2024-11-20 15:13:41.979479] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:55.715 [2024-11-20 15:13:41.979489] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:55.715 [2024-11-20 15:13:41.979502] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:55.715 [2024-11-20 15:13:41.979510] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:55.715 [2024-11-20 15:13:41.979517] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:55.715 15:13:41 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.715 15:13:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:55.715 15:13:41 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.715 15:13:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.715 [2024-11-20 15:13:42.119678] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:55.715 15:13:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.715 15:13:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:55.715 15:13:42 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.715 15:13:42 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.715 15:13:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.715 ************************************ 00:05:55.715 START TEST scheduler_create_thread 00:05:55.715 ************************************ 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.715 2 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.715 3 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.715 4 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.715 5 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.715 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.974 6 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.974 7 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.974 8 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.974 9 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.974 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.233 10 00:05:56.233 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.233 15:13:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:56.233 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.233 15:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.610 15:13:43 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.610 15:13:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:57.610 15:13:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:57.610 15:13:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.610 15:13:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.587 15:13:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.587 15:13:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:58.587 15:13:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.587 15:13:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.155 15:13:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.155 15:13:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:59.155 15:13:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:59.155 15:13:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.155 15:13:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.090 ************************************ 00:06:00.090 END TEST scheduler_create_thread 00:06:00.090 ************************************ 00:06:00.090 15:13:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.090 00:06:00.090 real 0m4.211s 00:06:00.090 user 0m0.022s 00:06:00.090 sys 0m0.014s 00:06:00.090 15:13:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.090 15:13:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.090 15:13:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:00.090 15:13:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60243 00:06:00.090 15:13:46 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60243 ']' 00:06:00.090 15:13:46 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60243 00:06:00.090 15:13:46 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:00.090 15:13:46 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.090 15:13:46 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60243 00:06:00.090 15:13:46 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:00.090 15:13:46 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:00.090 killing process with pid 60243 00:06:00.090 15:13:46 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60243' 00:06:00.090 15:13:46 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60243 00:06:00.090 15:13:46 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60243 00:06:00.349 [2024-11-20 15:13:46.626766] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:00.607 00:06:00.607 real 0m6.237s 00:06:00.607 user 0m13.689s 00:06:00.607 sys 0m0.548s 00:06:00.607 15:13:46 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.607 15:13:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.607 ************************************ 00:06:00.607 END TEST event_scheduler 00:06:00.607 ************************************ 00:06:00.607 15:13:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:00.607 15:13:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:00.607 15:13:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.608 15:13:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.608 15:13:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.608 ************************************ 00:06:00.608 START TEST app_repeat 00:06:00.608 ************************************ 00:06:00.608 15:13:47 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60377 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.608 Process app_repeat pid: 60377 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60377' 00:06:00.608 spdk_app_start Round 0 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:00.608 15:13:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60377 /var/tmp/spdk-nbd.sock 00:06:00.608 15:13:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60377 ']' 00:06:00.608 15:13:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.608 15:13:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.608 15:13:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.608 15:13:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.608 15:13:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.866 [2024-11-20 15:13:47.090568] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
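Between rounds the harness blocks in waitforlisten until the app's RPC socket answers; the trace above shows only the entry steps (@835-@842) before @844 disables xtrace, so the polling body itself is not logged. A plausible, simplified sketch of such a wait loop follows; this is hypothetical code, not the verbatim autotest_common.sh helper:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do        # max_retries=100, as set at @840
            kill -0 "$pid" || return 1         # give up if the app died
            # probe the UNIX socket until the RPC server responds
            ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.5                          # assumed poll interval
        done
        return 1
    }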
00:06:00.866 [2024-11-20 15:13:47.090689] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60377 ] 00:06:00.866 [2024-11-20 15:13:47.241657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.866 [2024-11-20 15:13:47.326475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.866 [2024-11-20 15:13:47.326477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.802 15:13:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.802 15:13:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:01.802 15:13:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.061 Malloc0 00:06:02.061 15:13:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.320 Malloc1 00:06:02.320 15:13:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.320 15:13:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.579 /dev/nbd0 00:06:02.579 15:13:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.579 15:13:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:02.579 15:13:48 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.579 1+0 records in 00:06:02.579 1+0 records out 00:06:02.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274005 s, 14.9 MB/s 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.579 15:13:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.579 15:13:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.579 15:13:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.579 15:13:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.837 /dev/nbd1 00:06:02.837 15:13:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.837 15:13:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.837 1+0 records in 00:06:02.837 1+0 records out 00:06:02.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352057 s, 11.6 MB/s 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.837 15:13:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.837 15:13:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.837 15:13:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.837 15:13:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.837 15:13:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
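The @872-@893 sequence above is the waitfornbd gate, run once per exported device: poll /proc/partitions until the nbd node appears, then prove the device serves data with a single 4 KiB direct-I/O read. A condensed sketch of that flow; the individual commands come from the trace, while the back-off delay and temp-file path are assumptions:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do                       # @875
            grep -q -w "$nbd_name" /proc/partitions && break  # @876/@877
            sleep 0.1                                         # assumed back-off
        done
        # @889: one 4 KiB direct read through the device
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)                       # @890
        rm -f /tmp/nbdtest                                    # @891
        [ "$size" != 0 ]                                      # @892: copy must be non-empty
    }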
00:06:02.837 15:13:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.095 { 00:06:03.095 "bdev_name": "Malloc0", 00:06:03.095 "nbd_device": "/dev/nbd0" 00:06:03.095 }, 00:06:03.095 { 00:06:03.095 "bdev_name": "Malloc1", 00:06:03.095 "nbd_device": "/dev/nbd1" 00:06:03.095 } 00:06:03.095 ]' 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.095 { 00:06:03.095 "bdev_name": "Malloc0", 00:06:03.095 "nbd_device": "/dev/nbd0" 00:06:03.095 }, 00:06:03.095 { 00:06:03.095 "bdev_name": "Malloc1", 00:06:03.095 "nbd_device": "/dev/nbd1" 00:06:03.095 } 00:06:03.095 ]' 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.095 /dev/nbd1' 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.095 /dev/nbd1' 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.095 256+0 records in 00:06:03.095 256+0 records out 00:06:03.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013264 s, 79.1 MB/s 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.095 15:13:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.354 256+0 records in 00:06:03.354 256+0 records out 00:06:03.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269338 s, 38.9 MB/s 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.354 256+0 records in 00:06:03.354 256+0 records out 00:06:03.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256555 s, 40.9 MB/s 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.354 15:13:49 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.354 15:13:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.613 15:13:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.613 15:13:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.613 15:13:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.613 15:13:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.613 15:13:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.613 15:13:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.613 15:13:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.613 15:13:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.613 15:13:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.613 15:13:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.872 15:13:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.872 15:13:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.872 15:13:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.872 15:13:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.872 15:13:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.872 15:13:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.872 15:13:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.872 15:13:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.872 15:13:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.872 15:13:50 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.872 15:13:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.131 15:13:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.131 15:13:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.131 15:13:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.131 15:13:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.131 15:13:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.131 15:13:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.131 15:13:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.131 15:13:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.131 15:13:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.131 15:13:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.131 15:13:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.131 15:13:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.131 15:13:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.389 15:13:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.649 [2024-11-20 15:13:50.979978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.649 [2024-11-20 15:13:51.061018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.649 [2024-11-20 15:13:51.061018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.908 [2024-11-20 15:13:51.135436] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.908 [2024-11-20 15:13:51.135511] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.446 spdk_app_start Round 1 00:06:07.446 15:13:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.446 15:13:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:07.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.446 15:13:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60377 /var/tmp/spdk-nbd.sock 00:06:07.446 15:13:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60377 ']' 00:06:07.446 15:13:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.446 15:13:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.446 15:13:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
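The @34 spdk_kill_instance call and @35 sleep above close out Round 0; event.sh then loops into the next round against the same app_repeat process. A rough sketch of the loop shape, reconstructed from the @23-@35 xtrace with the per-round bdev/nbd verification elided:

    for i in {0..2}; do                                       # @23: three rounds
        echo "spdk_app_start Round $i"                        # @24
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock    # @25: RPC socket back up
        # @27/@28: bdev_malloc_create 64 4096 twice, then the nbd write/verify pass
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # @34
        sleep 3                                               # @35: give the app time to restart
    done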
00:06:07.446 15:13:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.446 15:13:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.705 15:13:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.705 15:13:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:07.705 15:13:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.964 Malloc0 00:06:07.964 15:13:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.223 Malloc1 00:06:08.223 15:13:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.223 15:13:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.483 /dev/nbd0 00:06:08.483 15:13:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.483 15:13:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.483 1+0 records in 00:06:08.483 1+0 records out 
00:06:08.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270934 s, 15.1 MB/s 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.483 15:13:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:08.483 15:13:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.483 15:13:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.483 15:13:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.742 /dev/nbd1 00:06:08.742 15:13:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.742 15:13:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.742 15:13:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:08.742 15:13:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:08.742 15:13:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.742 15:13:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.742 15:13:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:08.742 15:13:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:08.742 15:13:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.742 15:13:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.742 15:13:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.742 1+0 records in 00:06:08.742 1+0 records out 00:06:08.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342514 s, 12.0 MB/s 00:06:08.742 15:13:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.742 15:13:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:08.742 15:13:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.743 15:13:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.743 15:13:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:08.743 15:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.743 15:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.743 15:13:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.743 15:13:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.743 15:13:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.002 { 00:06:09.002 "bdev_name": "Malloc0", 00:06:09.002 "nbd_device": "/dev/nbd0" 00:06:09.002 }, 00:06:09.002 { 00:06:09.002 "bdev_name": "Malloc1", 00:06:09.002 "nbd_device": "/dev/nbd1" 00:06:09.002 } 
00:06:09.002 ]' 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.002 { 00:06:09.002 "bdev_name": "Malloc0", 00:06:09.002 "nbd_device": "/dev/nbd0" 00:06:09.002 }, 00:06:09.002 { 00:06:09.002 "bdev_name": "Malloc1", 00:06:09.002 "nbd_device": "/dev/nbd1" 00:06:09.002 } 00:06:09.002 ]' 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.002 /dev/nbd1' 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.002 /dev/nbd1' 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.002 256+0 records in 00:06:09.002 256+0 records out 00:06:09.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132481 s, 79.1 MB/s 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.002 256+0 records in 00:06:09.002 256+0 records out 00:06:09.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280375 s, 37.4 MB/s 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.002 256+0 records in 00:06:09.002 256+0 records out 00:06:09.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237841 s, 44.1 MB/s 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.002 15:13:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.002 15:13:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.261 15:13:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.261 15:13:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.261 15:13:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.261 15:13:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.261 15:13:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.261 15:13:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.261 15:13:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.261 15:13:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.261 15:13:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.261 15:13:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.520 15:13:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.520 15:13:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.520 15:13:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.520 15:13:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.520 15:13:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.520 15:13:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.520 15:13:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.520 15:13:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.520 15:13:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.521 15:13:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.521 15:13:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.779 15:13:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.779 15:13:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.779 15:13:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:09.779 15:13:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.779 15:13:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.779 15:13:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.779 15:13:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.779 15:13:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.779 15:13:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.779 15:13:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.779 15:13:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.779 15:13:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.779 15:13:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.038 15:13:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:10.297 [2024-11-20 15:13:56.725540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.556 [2024-11-20 15:13:56.794097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.556 [2024-11-20 15:13:56.794097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.556 [2024-11-20 15:13:56.869745] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.556 [2024-11-20 15:13:56.869796] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.112 15:13:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.112 spdk_app_start Round 2 00:06:13.112 15:13:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:13.112 15:13:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60377 /var/tmp/spdk-nbd.sock 00:06:13.112 15:13:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60377 ']' 00:06:13.112 15:13:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.112 15:13:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.112 15:13:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
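Each round of app_repeat runs the cycle the trace above just closed out: create two 64 MB malloc bdevs with 4 KiB blocks, expose them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each with O_DIRECT, verify it byte for byte, then tear everything down and SIGTERM the app. Condensed into plain commands (repo paths shortened to rpc.py; the commands themselves are taken from the trace):

    rpc=/var/tmp/spdk-nbd.sock
    rpc.py -s "$rpc" bdev_malloc_create 64 4096        # -> Malloc0
    rpc.py -s "$rpc" bdev_malloc_create 64 4096        # -> Malloc1
    rpc.py -s "$rpc" nbd_start_disk Malloc0 /dev/nbd0
    rpc.py -s "$rpc" nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB seed file
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest "$dev"                    # byte-for-byte verify
    done
    rm nbdrandtest

    rpc.py -s "$rpc" nbd_stop_disk /dev/nbd0
    rpc.py -s "$rpc" nbd_stop_disk /dev/nbd1
    rpc.py -s "$rpc" spdk_kill_instance SIGTERM            # end the round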
00:06:13.112 15:13:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.112 15:13:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.372 15:13:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.372 15:13:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:13.372 15:13:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.631 Malloc0 00:06:13.631 15:14:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.891 Malloc1 00:06:13.891 15:14:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.891 15:14:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.150 /dev/nbd0 00:06:14.150 15:14:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.150 15:14:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.150 15:14:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:14.150 15:14:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:14.150 15:14:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.150 15:14:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.150 15:14:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:14.150 15:14:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:14.150 15:14:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.150 15:14:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.150 15:14:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.150 1+0 records in 00:06:14.150 1+0 records out 
00:06:14.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304651 s, 13.4 MB/s 00:06:14.151 15:14:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.151 15:14:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:14.151 15:14:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.151 15:14:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.151 15:14:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:14.151 15:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.410 15:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.410 15:14:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.410 /dev/nbd1 00:06:14.410 15:14:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.669 15:14:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.669 1+0 records in 00:06:14.669 1+0 records out 00:06:14.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460506 s, 8.9 MB/s 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.669 15:14:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:14.669 15:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.669 15:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.669 15:14:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.669 15:14:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.669 15:14:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.928 { 00:06:14.928 "bdev_name": "Malloc0", 00:06:14.928 "nbd_device": "/dev/nbd0" 00:06:14.928 }, 00:06:14.928 { 00:06:14.928 "bdev_name": "Malloc1", 00:06:14.928 "nbd_device": "/dev/nbd1" 00:06:14.928 } 
00:06:14.928 ]' 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.928 { 00:06:14.928 "bdev_name": "Malloc0", 00:06:14.928 "nbd_device": "/dev/nbd0" 00:06:14.928 }, 00:06:14.928 { 00:06:14.928 "bdev_name": "Malloc1", 00:06:14.928 "nbd_device": "/dev/nbd1" 00:06:14.928 } 00:06:14.928 ]' 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.928 /dev/nbd1' 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.928 /dev/nbd1' 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.928 256+0 records in 00:06:14.928 256+0 records out 00:06:14.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012046 s, 87.0 MB/s 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.928 256+0 records in 00:06:14.928 256+0 records out 00:06:14.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220783 s, 47.5 MB/s 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.928 256+0 records in 00:06:14.928 256+0 records out 00:06:14.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300567 s, 34.9 MB/s 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.928 15:14:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.187 15:14:01 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:15.187 15:14:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.187 15:14:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.187 15:14:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.187 15:14:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.187 15:14:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.187 15:14:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.187 15:14:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.187 15:14:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.187 15:14:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.187 15:14:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.187 15:14:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.446 15:14:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.446 15:14:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.446 15:14:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.446 15:14:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.446 15:14:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.446 15:14:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.446 15:14:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.446 15:14:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.446 15:14:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.446 15:14:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.706 15:14:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.706 15:14:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.706 15:14:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.706 15:14:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.706 15:14:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.706 15:14:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.706 15:14:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.706 15:14:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.706 15:14:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.706 15:14:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.706 15:14:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.966 15:14:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.966 15:14:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.966 15:14:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:15.966 15:14:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.966 15:14:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.966 15:14:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.966 15:14:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.966 15:14:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.966 15:14:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.966 15:14:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.966 15:14:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.966 15:14:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.966 15:14:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.225 15:14:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.483 [2024-11-20 15:14:02.825699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.483 [2024-11-20 15:14:02.906929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.483 [2024-11-20 15:14:02.906930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.742 [2024-11-20 15:14:02.981750] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.742 [2024-11-20 15:14:02.981825] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.276 15:14:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60377 /var/tmp/spdk-nbd.sock 00:06:19.276 15:14:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60377 ']' 00:06:19.276 15:14:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.276 15:14:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.276 15:14:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
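Before each SIGTERM, the test proves the teardown left nothing behind: nbd_get_disks must return an empty JSON array and the derived device count must be zero. The counting pipeline is worth noting because grep -c exits non-zero on a zero count, which is why a bare true appears in the trace as the fallback. A sketch of the check:

    rpc=/var/tmp/spdk-nbd.sock
    nbd_disks_json=$(scripts/rpc.py -s "$rpc" nbd_get_disks)            # '[]' here
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c exits 1 when the count is 0, so keep the assignment alive
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    [ "$count" -ne 0 ] && exit 1    # leftover nbd devices fail the round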
00:06:19.276 15:14:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.276 15:14:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.535 15:14:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.535 15:14:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:19.535 15:14:05 event.app_repeat -- event/event.sh@39 -- # killprocess 60377 00:06:19.535 15:14:05 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60377 ']' 00:06:19.535 15:14:05 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60377 00:06:19.535 15:14:05 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:19.535 15:14:05 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.535 15:14:05 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60377 00:06:19.535 killing process with pid 60377 00:06:19.535 15:14:05 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.535 15:14:05 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.535 15:14:05 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60377' 00:06:19.535 15:14:05 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60377 00:06:19.535 15:14:05 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60377 00:06:19.794 spdk_app_start is called in Round 0. 00:06:19.794 Shutdown signal received, stop current app iteration 00:06:19.794 Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 reinitialization... 00:06:19.794 spdk_app_start is called in Round 1. 00:06:19.794 Shutdown signal received, stop current app iteration 00:06:19.794 Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 reinitialization... 00:06:19.794 spdk_app_start is called in Round 2. 00:06:19.794 Shutdown signal received, stop current app iteration 00:06:19.794 Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 reinitialization... 00:06:19.794 spdk_app_start is called in Round 3. 00:06:19.794 Shutdown signal received, stop current app iteration 00:06:19.794 15:14:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:19.794 15:14:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:19.794 00:06:19.794 real 0m19.114s 00:06:19.794 user 0m41.719s 00:06:19.794 sys 0m4.084s 00:06:19.794 15:14:06 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.794 15:14:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.794 ************************************ 00:06:19.794 END TEST app_repeat 00:06:19.794 ************************************ 00:06:19.794 15:14:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:19.794 15:14:06 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:19.794 15:14:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.794 15:14:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.794 15:14:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.794 ************************************ 00:06:19.794 START TEST cpu_locks 00:06:19.794 ************************************ 00:06:19.794 15:14:06 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:20.054 * Looking for test storage... 
00:06:20.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:20.054 15:14:06 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.054 15:14:06 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.054 15:14:06 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.054 15:14:06 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.054 15:14:06 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:20.054 15:14:06 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.054 15:14:06 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.054 --rc genhtml_branch_coverage=1 00:06:20.054 --rc genhtml_function_coverage=1 00:06:20.054 --rc genhtml_legend=1 00:06:20.054 --rc geninfo_all_blocks=1 00:06:20.054 --rc geninfo_unexecuted_blocks=1 00:06:20.054 00:06:20.054 ' 00:06:20.054 15:14:06 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.054 --rc genhtml_branch_coverage=1 00:06:20.054 --rc genhtml_function_coverage=1 
00:06:20.054 --rc genhtml_legend=1 00:06:20.054 --rc geninfo_all_blocks=1 00:06:20.054 --rc geninfo_unexecuted_blocks=1 00:06:20.054 00:06:20.054 ' 00:06:20.054 15:14:06 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.055 --rc genhtml_branch_coverage=1 00:06:20.055 --rc genhtml_function_coverage=1 00:06:20.055 --rc genhtml_legend=1 00:06:20.055 --rc geninfo_all_blocks=1 00:06:20.055 --rc geninfo_unexecuted_blocks=1 00:06:20.055 00:06:20.055 ' 00:06:20.055 15:14:06 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.055 --rc genhtml_branch_coverage=1 00:06:20.055 --rc genhtml_function_coverage=1 00:06:20.055 --rc genhtml_legend=1 00:06:20.055 --rc geninfo_all_blocks=1 00:06:20.055 --rc geninfo_unexecuted_blocks=1 00:06:20.055 00:06:20.055 ' 00:06:20.055 15:14:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:20.055 15:14:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:20.055 15:14:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:20.055 15:14:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:20.055 15:14:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.055 15:14:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.055 15:14:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.055 ************************************ 00:06:20.055 START TEST default_locks 00:06:20.055 ************************************ 00:06:20.055 15:14:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:20.055 15:14:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61011 00:06:20.055 15:14:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.055 15:14:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61011 00:06:20.055 15:14:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61011 ']' 00:06:20.055 15:14:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.055 15:14:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.055 15:14:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.055 15:14:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.055 15:14:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.314 [2024-11-20 15:14:06.564176] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
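The lcov probe at the top of cpu_locks runs the suite's generic version comparator: lt 1.15 2 splits both strings on ., -, and : and compares them field by field. A behavioral sketch of that helper, assuming numeric fields and zero-padding for missing ones (the trace shows the splitting and per-field compares, not the full source):

    # Is version $1 strictly older than version $2? (sketch of lt/cmp_versions)
    version_lt() {
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # assumed zero-padding
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal
    }
    version_lt 1.15 2 && echo "installed lcov predates 2.x"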
00:06:20.314 [2024-11-20 15:14:06.564292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61011 ] 00:06:20.314 [2024-11-20 15:14:06.719825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.573 [2024-11-20 15:14:06.802439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.155 15:14:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.155 15:14:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:21.155 15:14:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61011 00:06:21.155 15:14:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61011 00:06:21.155 15:14:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.721 15:14:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61011 00:06:21.721 15:14:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 61011 ']' 00:06:21.721 15:14:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 61011 00:06:21.721 15:14:07 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:21.721 15:14:07 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.721 15:14:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61011 00:06:21.721 15:14:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.721 15:14:07 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.721 killing process with pid 61011 00:06:21.721 15:14:07 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61011' 00:06:21.721 15:14:07 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 61011 00:06:21.721 15:14:07 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 61011 00:06:22.289 15:14:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61011 00:06:22.289 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:22.289 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61011 00:06:22.289 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:22.289 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.289 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 61011 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61011 ']' 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.290 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.290 ERROR: process (pid: 61011) is no longer running 00:06:22.290 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61011) - No such process 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:22.290 00:06:22.290 real 0m2.036s 00:06:22.290 user 0m2.027s 00:06:22.290 sys 0m0.754s 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.290 ************************************ 00:06:22.290 END TEST default_locks 00:06:22.290 ************************************ 00:06:22.290 15:14:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.290 15:14:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:22.290 15:14:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.290 15:14:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.290 15:14:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.290 ************************************ 00:06:22.290 START TEST default_locks_via_rpc 00:06:22.290 ************************************ 00:06:22.290 15:14:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:22.290 15:14:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61075 00:06:22.290 15:14:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.290 15:14:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61075 00:06:22.290 15:14:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61075 ']' 00:06:22.290 15:14:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.290 15:14:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
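default_locks verifies the core lock from outside the process: lslocks lists the file locks held by the target pid, and a grep for the spdk_cpu_lock name proves the lock is taken. The second half is a negative test, where the NOT wrapper inverts waitforlisten's status once the pid is dead; the es > 128 branch visible in the trace keeps death-by-signal from counting as a clean failure. Simplified sketches of both helpers (the real NOT also validates its argument before running it):

    locks_exist() {
        # the pid's lock table must mention its spdk_cpu_lock file
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    NOT() {
        # succeed only if the wrapped command fails "normally"
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1    # killed by a signal: not a clean failure
        (( es != 0 ))                 # plain non-zero exit: expected, pass
    }

    locks_exist 61011            # passes while the target holds its core lock
    NOT waitforlisten 61011      # passes here: the pid was already killed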
00:06:22.290 15:14:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.290 15:14:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.290 15:14:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.290 [2024-11-20 15:14:08.676587] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:22.290 [2024-11-20 15:14:08.676715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61075 ] 00:06:22.549 [2024-11-20 15:14:08.828932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.549 [2024-11-20 15:14:08.911558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.487 15:14:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61075 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.488 15:14:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61075 00:06:23.747 15:14:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61075 00:06:23.747 15:14:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 61075 ']' 00:06:23.747 15:14:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 61075 00:06:23.747 15:14:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:23.747 15:14:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.747 15:14:10 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61075 00:06:23.747 killing process with pid 61075 00:06:23.747 15:14:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.747 15:14:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.747 15:14:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61075' 00:06:23.747 15:14:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 61075 00:06:23.747 15:14:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 61075 00:06:24.315 00:06:24.315 real 0m2.073s 00:06:24.315 user 0m2.078s 00:06:24.315 sys 0m0.751s 00:06:24.315 15:14:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.315 15:14:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.315 ************************************ 00:06:24.315 END TEST default_locks_via_rpc 00:06:24.315 ************************************ 00:06:24.315 15:14:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:24.315 15:14:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.315 15:14:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.315 15:14:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.315 ************************************ 00:06:24.315 START TEST non_locking_app_on_locked_coremask 00:06:24.315 ************************************ 00:06:24.315 15:14:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:24.315 15:14:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61144 00:06:24.315 15:14:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.315 15:14:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61144 /var/tmp/spdk.sock 00:06:24.315 15:14:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61144 ']' 00:06:24.315 15:14:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.315 15:14:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.315 15:14:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.315 15:14:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.315 15:14:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.574 [2024-11-20 15:14:10.829422] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
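default_locks_via_rpc exercises the same lock through the JSON-RPC surface instead of process state: framework_disable_cpumask_locks releases the per-core lock on a live target (the trace then confirms no lock files remain), and framework_enable_cpumask_locks reacquires it, after which the lslocks check succeeds again. The calls as issued in the trace (rpc_cmd targets the default /var/tmp/spdk.sock socket; $spdk_tgt_pid is the pid waitforlisten tracked, 61075 here):

    scripts/rpc.py framework_disable_cpumask_locks      # drop the core lock
    scripts/rpc.py framework_enable_cpumask_locks       # take it back
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock  # held once more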
00:06:24.574 [2024-11-20 15:14:10.829828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61144 ] 00:06:24.574 [2024-11-20 15:14:10.968566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.574 [2024-11-20 15:14:11.053118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.512 15:14:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.512 15:14:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:25.512 15:14:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:25.512 15:14:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61172 00:06:25.512 15:14:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61172 /var/tmp/spdk2.sock 00:06:25.512 15:14:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61172 ']' 00:06:25.512 15:14:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.512 15:14:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.512 15:14:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.512 15:14:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.512 15:14:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.512 [2024-11-20 15:14:11.871853] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:25.512 [2024-11-20 15:14:11.871994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61172 ] 00:06:25.772 [2024-11-20 15:14:12.022186] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
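The lock checks threaded through these cpu_locks tests (lslocks -p 61075 piped through grep above, and lslocks -p 61144 shortly below) are how the suite decides whether a target really holds its per-core lock files. A minimal sketch of that helper, reconstructed from the xtrace at event/cpu_locks.sh@22; only the two commands are verbatim, the wrapper shape is assumed:

    # Succeeds when the pid holds at least one /var/tmp/spdk_cpu_lock_* file lock.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

A plain spdk_tgt passes this check; an instance started with --disable-cpumask-locks holds no such locks and fails it.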
00:06:25.772 [2024-11-20 15:14:12.022279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.772 [2024-11-20 15:14:12.193718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.709 15:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.709 15:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.709 15:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61144 00:06:26.709 15:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61144 00:06:26.709 15:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.809 15:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61144 00:06:27.809 15:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61144 ']' 00:06:27.809 15:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61144 00:06:27.809 15:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.809 15:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.809 15:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61144 00:06:27.809 killing process with pid 61144 00:06:27.810 15:14:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.810 15:14:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.810 15:14:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61144' 00:06:27.810 15:14:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61144 00:06:27.810 15:14:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61144 00:06:28.746 15:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61172 00:06:28.746 15:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61172 ']' 00:06:28.746 15:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61172 00:06:28.746 15:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:28.746 15:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.746 15:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61172 00:06:28.746 killing process with pid 61172 00:06:28.746 15:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.746 15:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.746 15:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61172' 00:06:28.746 15:14:15 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61172 00:06:28.746 15:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61172 00:06:29.314 00:06:29.314 real 0m4.913s 00:06:29.314 user 0m5.165s 00:06:29.314 sys 0m1.573s 00:06:29.314 15:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.314 15:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.314 ************************************ 00:06:29.314 END TEST non_locking_app_on_locked_coremask 00:06:29.314 ************************************ 00:06:29.314 15:14:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:29.314 15:14:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.314 15:14:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.314 15:14:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.314 ************************************ 00:06:29.314 START TEST locking_app_on_unlocked_coremask 00:06:29.314 ************************************ 00:06:29.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.314 15:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:29.314 15:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61262 00:06:29.314 15:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61262 /var/tmp/spdk.sock 00:06:29.314 15:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61262 ']' 00:06:29.314 15:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.314 15:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.314 15:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.314 15:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.314 15:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.314 15:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:29.573 [2024-11-20 15:14:15.814905] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:29.574 [2024-11-20 15:14:15.815250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61262 ] 00:06:29.574 [2024-11-20 15:14:15.969906] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
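The non_locking test that just finished pairs a locking target with a lock-free one on the same core. Both launch commands appear verbatim in the xtrace at event/cpu_locks.sh@79 and @83; the backgrounding and pid capture here are assumed:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # pid 61144: takes /var/tmp/spdk_cpu_lock_000
    spdk_tgt_pid=$!
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 61172: same core 0, no lock
    spdk_tgt_pid2=$!

Only the first instance shows up in lslocks; the second coexists on core 0 precisely because --disable-cpumask-locks skips the claim.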
00:06:29.574 [2024-11-20 15:14:15.969981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.832 [2024-11-20 15:14:16.055851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.401 15:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.401 15:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.401 15:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61290 00:06:30.401 15:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61290 /var/tmp/spdk2.sock 00:06:30.401 15:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61290 ']' 00:06:30.401 15:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.401 15:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.401 15:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.401 15:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.401 15:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.401 15:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.659 [2024-11-20 15:14:16.887045] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
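Every target start in this log is gated by waitforlisten, whose xtrace shows rpc_addr and max_retries=100 being set before the 'Waiting for process...' message. A plausible reconstruction; the polling body is assumed, and the real helper lives in test/common/autotest_common.sh:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [ -S "$rpc_addr" ] && return 0           # assumed readiness check: RPC socket present
            sleep 0.1
        done
        return 1
    }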
00:06:30.659 [2024-11-20 15:14:16.887198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61290 ] 00:06:30.659 [2024-11-20 15:14:17.048796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.917 [2024-11-20 15:14:17.222935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.485 15:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.485 15:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:31.485 15:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61290 00:06:31.485 15:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61290 00:06:31.485 15:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.421 15:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61262 00:06:32.421 15:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61262 ']' 00:06:32.421 15:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61262 00:06:32.421 15:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:32.422 15:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.422 15:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61262 00:06:32.422 killing process with pid 61262 00:06:32.422 15:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.422 15:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.422 15:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61262' 00:06:32.422 15:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61262 00:06:32.422 15:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61262 00:06:33.803 15:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61290 00:06:33.803 15:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61290 ']' 00:06:33.803 15:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61290 00:06:33.803 15:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.803 15:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.803 15:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61290 00:06:33.803 15:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.803 15:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.803 15:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61290' 00:06:33.803 killing process with pid 61290 00:06:33.803 15:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61290 00:06:33.803 15:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61290 00:06:34.095 ************************************ 00:06:34.095 END TEST locking_app_on_unlocked_coremask 00:06:34.095 ************************************ 00:06:34.095 00:06:34.095 real 0m4.776s 00:06:34.095 user 0m5.070s 00:06:34.095 sys 0m1.455s 00:06:34.095 15:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.095 15:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.390 15:14:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:34.390 15:14:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.390 15:14:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.390 15:14:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.390 ************************************ 00:06:34.390 START TEST locking_app_on_locked_coremask 00:06:34.390 ************************************ 00:06:34.390 15:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:34.390 15:14:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61376 00:06:34.390 15:14:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61376 /var/tmp/spdk.sock 00:06:34.390 15:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61376 ']' 00:06:34.390 15:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.390 15:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.390 15:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.390 15:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.390 15:14:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.390 15:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.390 [2024-11-20 15:14:20.663591] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:06:34.390 [2024-11-20 15:14:20.663731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61376 ] 00:06:34.390 [2024-11-20 15:14:20.798583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.649 [2024-11-20 15:14:20.881956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61408 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61408 /var/tmp/spdk2.sock 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61408 /var/tmp/spdk2.sock 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61408 /var/tmp/spdk2.sock 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61408 ']' 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.588 15:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.588 [2024-11-20 15:14:21.775149] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:06:35.588 [2024-11-20 15:14:21.775246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61408 ] 00:06:35.588 [2024-11-20 15:14:21.929803] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61376 has claimed it. 00:06:35.588 [2024-11-20 15:14:21.929896] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.156 ERROR: process (pid: 61408) is no longer running 00:06:36.156 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61408) - No such process 00:06:36.156 15:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.157 15:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:36.157 15:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:36.157 15:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.157 15:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.157 15:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.157 15:14:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61376 00:06:36.157 15:14:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61376 00:06:36.157 15:14:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.726 15:14:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61376 00:06:36.726 15:14:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61376 ']' 00:06:36.726 15:14:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61376 00:06:36.726 15:14:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:36.726 15:14:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.726 15:14:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61376 00:06:36.726 15:14:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.726 15:14:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.726 killing process with pid 61376 00:06:36.726 15:14:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61376' 00:06:36.726 15:14:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61376 00:06:36.726 15:14:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61376 00:06:37.293 00:06:37.293 real 0m3.134s 00:06:37.293 user 0m3.573s 00:06:37.293 sys 0m0.904s 00:06:37.293 15:14:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.293 15:14:23 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:37.293 ************************************ 00:06:37.293 END TEST locking_app_on_locked_coremask 00:06:37.293 ************************************ 00:06:37.553 15:14:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.553 15:14:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.553 15:14:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.553 15:14:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.553 ************************************ 00:06:37.553 START TEST locking_overlapped_coremask 00:06:37.553 ************************************ 00:06:37.553 15:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:37.553 15:14:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61462 00:06:37.553 15:14:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61462 /var/tmp/spdk.sock 00:06:37.553 15:14:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.553 15:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61462 ']' 00:06:37.553 15:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.553 15:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.553 15:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.553 15:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.553 15:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.553 [2024-11-20 15:14:23.869284] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:06:37.553 [2024-11-20 15:14:23.869375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61462 ] 00:06:37.553 [2024-11-20 15:14:24.005542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.812 [2024-11-20 15:14:24.087618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.812 [2024-11-20 15:14:24.087524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.812 [2024-11-20 15:14:24.087620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61492 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61492 /var/tmp/spdk2.sock 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61492 /var/tmp/spdk2.sock 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61492 /var/tmp/spdk2.sock 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61492 ']' 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.754 15:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 [2024-11-20 15:14:24.927207] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:06:38.754 [2024-11-20 15:14:24.927301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61492 ] 00:06:38.754 [2024-11-20 15:14:25.079976] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61462 has claimed it. 00:06:38.754 [2024-11-20 15:14:25.082042] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.334 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61492) - No such process 00:06:39.334 ERROR: process (pid: 61492) is no longer running 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61462 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61462 ']' 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61462 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61462 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61462' 00:06:39.334 killing process with pid 61462 00:06:39.334 15:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61462 00:06:39.334 15:14:25 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61462 00:06:39.899 00:06:39.899 real 0m2.410s 00:06:39.899 user 0m6.630s 00:06:39.899 sys 0m0.595s 00:06:39.899 15:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.899 15:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.899 ************************************ 00:06:39.899 END TEST locking_overlapped_coremask 00:06:39.900 ************************************ 00:06:39.900 15:14:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:39.900 15:14:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.900 15:14:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.900 15:14:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.900 ************************************ 00:06:39.900 START TEST locking_overlapped_coremask_via_rpc 00:06:39.900 ************************************ 00:06:39.900 15:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:39.900 15:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61543 00:06:39.900 15:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61543 /var/tmp/spdk.sock 00:06:39.900 15:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61543 ']' 00:06:39.900 15:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.900 15:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.900 15:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:39.900 15:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.900 15:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.900 15:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.900 [2024-11-20 15:14:26.340852] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:39.900 [2024-11-20 15:14:26.340948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61543 ] 00:06:40.158 [2024-11-20 15:14:26.494503] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
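In the overlapped test above, check_remaining_locks (event/cpu_locks.sh@139) asserted that exactly the lock files for mask 0x7 (cores 0-2) were on disk. The three statements below are taken directly from the xtrace at event/cpu_locks.sh@36-38, wrapped into a function for readability:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever is actually on disk
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for -m 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]         # exact match: no stale or missing locks
    }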
00:06:40.158 [2024-11-20 15:14:26.494577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.158 [2024-11-20 15:14:26.580958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.158 [2024-11-20 15:14:26.581084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.158 [2024-11-20 15:14:26.581089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.092 15:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.092 15:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.092 15:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61573 00:06:41.092 15:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61573 /var/tmp/spdk2.sock 00:06:41.092 15:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:41.092 15:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61573 ']' 00:06:41.092 15:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.092 15:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.092 15:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.092 15:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.092 15:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.092 [2024-11-20 15:14:27.337732] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:41.092 [2024-11-20 15:14:27.337856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61573 ] 00:06:41.092 [2024-11-20 15:14:27.498061] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.092 [2024-11-20 15:14:27.498296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.350 [2024-11-20 15:14:27.614265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.350 [2024-11-20 15:14:27.616844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:41.351 [2024-11-20 15:14:27.616845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.918 [2024-11-20 15:14:28.302895] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61543 has claimed it. 
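The ERROR just above is the expected outcome of the via_rpc variant: both targets start with --disable-cpumask-locks, the first (pid 61543, mask 0x7) claims its cores through the framework_enable_cpumask_locks RPC, and the second (mask 0x1c, overlapping on core 2) is then refused, as the JSON-RPC exchange below records. A hedged reproduction of the two calls, assuming SPDK's scripts/rpc.py run from the repo root:

    scripts/rpc.py framework_enable_cpumask_locks                         # pid 61543 claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # pid 61573: refused, Code=-32603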
00:06:41.918 2024/11/20 15:14:28 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:41.918 request: 00:06:41.918 { 00:06:41.918 "method": "framework_enable_cpumask_locks", 00:06:41.918 "params": {} 00:06:41.918 } 00:06:41.918 Got JSON-RPC error response 00:06:41.918 GoRPCClient: error on JSON-RPC call 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61543 /var/tmp/spdk.sock 00:06:41.918 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61543 ']' 00:06:41.919 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.919 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.919 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.919 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.919 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.178 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.178 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:42.178 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61573 /var/tmp/spdk2.sock 00:06:42.178 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61573 ']' 00:06:42.178 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.178 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.178 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:42.178 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.178 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.436 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.436 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:42.436 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:42.436 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.436 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.436 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.436 00:06:42.436 real 0m2.527s 00:06:42.436 user 0m1.193s 00:06:42.436 sys 0m0.276s 00:06:42.437 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.437 15:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.437 ************************************ 00:06:42.437 END TEST locking_overlapped_coremask_via_rpc 00:06:42.437 ************************************ 00:06:42.437 15:14:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:42.437 15:14:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61543 ]] 00:06:42.437 15:14:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61543 00:06:42.437 15:14:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61543 ']' 00:06:42.437 15:14:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61543 00:06:42.437 15:14:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:42.437 15:14:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.437 15:14:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61543 00:06:42.437 killing process with pid 61543 00:06:42.437 15:14:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.437 15:14:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.437 15:14:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61543' 00:06:42.437 15:14:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61543 00:06:42.437 15:14:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61543 00:06:43.003 15:14:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61573 ]] 00:06:43.003 15:14:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61573 00:06:43.003 15:14:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61573 ']' 00:06:43.003 15:14:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61573 00:06:43.003 15:14:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:43.003 15:14:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.003 
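The teardown interleaved around this point is killprocess, whose per-step xtrace ('[ -z ... ]', kill -0, uname, ps --no-headers -o comm=) spells out its shape. A simplified sketch; the sudo special case in the real helper is reduced to a bare guard here:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                           # refuse to touch a pid that is gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0, reactor_2, ... in this log
        fi
        [ "$process_name" = sudo ] && return 1               # simplification; the real helper handles sudo parents
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                          # works because the target is a child job of the test shell
    }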
15:14:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61573 00:06:43.262 15:14:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:43.262 15:14:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:43.262 killing process with pid 61573 00:06:43.262 15:14:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61573' 00:06:43.262 15:14:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61573 00:06:43.262 15:14:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61573 00:06:43.521 15:14:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.521 15:14:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:43.521 15:14:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61543 ]] 00:06:43.521 15:14:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61543 00:06:43.521 15:14:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61543 ']' 00:06:43.521 15:14:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61543 00:06:43.521 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61543) - No such process 00:06:43.521 Process with pid 61543 is not found 00:06:43.521 15:14:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61543 is not found' 00:06:43.521 15:14:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61573 ]] 00:06:43.521 15:14:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61573 00:06:43.521 15:14:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61573 ']' 00:06:43.521 15:14:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61573 00:06:43.521 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61573) - No such process 00:06:43.521 Process with pid 61573 is not found 00:06:43.521 15:14:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61573 is not found' 00:06:43.521 15:14:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.521 00:06:43.521 real 0m23.595s 00:06:43.521 user 0m38.754s 00:06:43.521 sys 0m7.447s 00:06:43.521 15:14:29 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.521 ************************************ 00:06:43.521 END TEST cpu_locks 00:06:43.521 ************************************ 00:06:43.521 15:14:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.521 00:06:43.521 real 0m53.635s 00:06:43.521 user 1m40.882s 00:06:43.521 sys 0m12.699s 00:06:43.521 15:14:29 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.521 ************************************ 00:06:43.521 END TEST event 00:06:43.521 15:14:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.521 ************************************ 00:06:43.521 15:14:29 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:43.521 15:14:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.521 15:14:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.521 15:14:29 -- common/autotest_common.sh@10 -- # set +x 00:06:43.521 ************************************ 00:06:43.521 START TEST thread 00:06:43.521 ************************************ 00:06:43.521 15:14:29 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:43.781 * Looking for test storage... 
00:06:43.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:43.781 15:14:30 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.781 15:14:30 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.781 15:14:30 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.781 15:14:30 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.781 15:14:30 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.781 15:14:30 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.781 15:14:30 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.781 15:14:30 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.781 15:14:30 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.781 15:14:30 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.781 15:14:30 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.781 15:14:30 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.781 15:14:30 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.781 15:14:30 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.781 15:14:30 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.781 15:14:30 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:43.781 15:14:30 thread -- scripts/common.sh@345 -- # : 1 00:06:43.781 15:14:30 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.781 15:14:30 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.781 15:14:30 thread -- scripts/common.sh@365 -- # decimal 1 00:06:43.781 15:14:30 thread -- scripts/common.sh@353 -- # local d=1 00:06:43.781 15:14:30 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.781 15:14:30 thread -- scripts/common.sh@355 -- # echo 1 00:06:43.781 15:14:30 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.781 15:14:30 thread -- scripts/common.sh@366 -- # decimal 2 00:06:43.781 15:14:30 thread -- scripts/common.sh@353 -- # local d=2 00:06:43.781 15:14:30 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.781 15:14:30 thread -- scripts/common.sh@355 -- # echo 2 00:06:43.781 15:14:30 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.781 15:14:30 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.781 15:14:30 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.781 15:14:30 thread -- scripts/common.sh@368 -- # return 0 00:06:43.781 15:14:30 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.781 15:14:30 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.781 --rc genhtml_branch_coverage=1 00:06:43.781 --rc genhtml_function_coverage=1 00:06:43.781 --rc genhtml_legend=1 00:06:43.781 --rc geninfo_all_blocks=1 00:06:43.781 --rc geninfo_unexecuted_blocks=1 00:06:43.781 00:06:43.781 ' 00:06:43.781 15:14:30 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.781 --rc genhtml_branch_coverage=1 00:06:43.781 --rc genhtml_function_coverage=1 00:06:43.781 --rc genhtml_legend=1 00:06:43.781 --rc geninfo_all_blocks=1 00:06:43.781 --rc geninfo_unexecuted_blocks=1 00:06:43.781 00:06:43.781 ' 00:06:43.781 15:14:30 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:43.781 --rc genhtml_branch_coverage=1 00:06:43.781 --rc genhtml_function_coverage=1 00:06:43.781 --rc genhtml_legend=1 00:06:43.781 --rc geninfo_all_blocks=1 00:06:43.781 --rc geninfo_unexecuted_blocks=1 00:06:43.781 00:06:43.781 ' 00:06:43.781 15:14:30 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.781 --rc genhtml_branch_coverage=1 00:06:43.781 --rc genhtml_function_coverage=1 00:06:43.781 --rc genhtml_legend=1 00:06:43.781 --rc geninfo_all_blocks=1 00:06:43.781 --rc geninfo_unexecuted_blocks=1 00:06:43.781 00:06:43.781 ' 00:06:43.781 15:14:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.781 15:14:30 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:43.781 15:14:30 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.781 15:14:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.781 ************************************ 00:06:43.781 START TEST thread_poller_perf 00:06:43.781 ************************************ 00:06:43.781 15:14:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.781 [2024-11-20 15:14:30.257463] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:43.781 [2024-11-20 15:14:30.257569] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61733 ] 00:06:44.040 [2024-11-20 15:14:30.413321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.040 Running 1000 pollers for 1 seconds with 1 microseconds period. 
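The lcov option block exported just above was gated by 'lt 1.15 2' in scripts/common.sh, and the xtrace exposes the whole comparison: split both versions on '.', '-' or ':', then compare field by field. A runnable sketch of that logic; function names match the xtrace, the non-numeric fallback is assumed:

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] || d=0   # assumed fallback for non-numeric fields
        echo "$d"
    }
    lt() {   # returns 0 when version $1 < version $2
        local IFS=.-: v a b
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=$(decimal "${ver1[v]:-0}")
            b=$(decimal "${ver2[v]:-0}")
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal is not less-than
    }

Here lt 1.15 2 compares 1 < 2 in the first field and succeeds, so the branch-coverage flags are kept.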
00:06:44.040 [2024-11-20 15:14:30.481719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.492 [2024-11-20T15:14:31.974Z] ====================================== 00:06:45.492 [2024-11-20T15:14:31.974Z] busy:2498006856 (cyc) 00:06:45.492 [2024-11-20T15:14:31.974Z] total_run_count: 372000 00:06:45.492 [2024-11-20T15:14:31.974Z] tsc_hz: 2490000000 (cyc) 00:06:45.492 [2024-11-20T15:14:31.974Z] ====================================== 00:06:45.492 [2024-11-20T15:14:31.974Z] poller_cost: 6715 (cyc), 2696 (nsec) 00:06:45.492 00:06:45.492 real 0m1.328s 00:06:45.492 user 0m1.159s 00:06:45.492 sys 0m0.062s 00:06:45.492 15:14:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.492 15:14:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.492 ************************************ 00:06:45.492 END TEST thread_poller_perf 00:06:45.492 ************************************ 00:06:45.492 15:14:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.492 15:14:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:45.492 15:14:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.492 15:14:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.492 ************************************ 00:06:45.492 START TEST thread_poller_perf 00:06:45.492 ************************************ 00:06:45.492 15:14:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.492 [2024-11-20 15:14:31.658480] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:45.492 [2024-11-20 15:14:31.658590] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61763 ] 00:06:45.492 [2024-11-20 15:14:31.812620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.492 Running 1000 pollers for 1 seconds with 0 microseconds period. 
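The summary block above is straight arithmetic: poller_cost is busy cycles divided by total_run_count, and the nanosecond figure converts cycles at tsc_hz. Reproducing the first (-l 1, i.e. 1 microsecond period) run's numbers in shell:

    echo $(( 2498006856 / 372000 ))             # 6715 cycles per poller call
    echo $(( 6715 * 1000000000 / 2490000000 ))  # 2696 ns per call at 2.49 GHz

At roughly 2.7 microseconds per timed call, one second of busy looping yields about the 372k calls reported.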
00:06:45.492 [2024-11-20 15:14:31.901837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.871 [2024-11-20T15:14:33.353Z] ====================================== 00:06:46.871 [2024-11-20T15:14:33.353Z] busy:2492467542 (cyc) 00:06:46.871 [2024-11-20T15:14:33.353Z] total_run_count: 5088000 00:06:46.871 [2024-11-20T15:14:33.353Z] tsc_hz: 2490000000 (cyc) 00:06:46.871 [2024-11-20T15:14:33.353Z] ====================================== 00:06:46.871 [2024-11-20T15:14:33.353Z] poller_cost: 489 (cyc), 196 (nsec) 00:06:46.871 00:06:46.871 real 0m1.342s 00:06:46.871 user 0m1.171s 00:06:46.871 sys 0m0.064s 00:06:46.871 15:14:32 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.871 ************************************ 00:06:46.871 END TEST thread_poller_perf 00:06:46.871 ************************************ 00:06:46.871 15:14:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.871 15:14:33 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:46.871 ************************************ 00:06:46.871 END TEST thread 00:06:46.871 ************************************ 00:06:46.871 00:06:46.871 real 0m3.055s 00:06:46.871 user 0m2.500s 00:06:46.871 sys 0m0.352s 00:06:46.871 15:14:33 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.871 15:14:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.871 15:14:33 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:46.871 15:14:33 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:46.871 15:14:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.871 15:14:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.871 15:14:33 -- common/autotest_common.sh@10 -- # set +x 00:06:46.871 ************************************ 00:06:46.871 START TEST app_cmdline 00:06:46.871 ************************************ 00:06:46.871 15:14:33 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:46.871 * Looking for test storage... 
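The second run passes -l 0, so the pollers are registered without a timer and fire on essentially every reactor iteration; the per-call figure collapses accordingly:

    echo $(( 2492467542 / 5088000 ))            # 489 cycles per poller call
    echo $(( 489 * 1000000000 / 2490000000 ))   # 196 ns per call at 2.49 GHz

That is roughly a 14x difference against the timed run above, which is the contrast this pair of runs is set up to expose.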
00:06:46.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:46.871 15:14:33 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.871 15:14:33 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.871 15:14:33 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.871 15:14:33 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.871 15:14:33 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:46.871 15:14:33 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.871 15:14:33 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.871 --rc genhtml_branch_coverage=1 00:06:46.871 --rc genhtml_function_coverage=1 00:06:46.871 --rc genhtml_legend=1 00:06:46.871 --rc geninfo_all_blocks=1 00:06:46.871 --rc geninfo_unexecuted_blocks=1 00:06:46.871 00:06:46.871 ' 00:06:46.871 15:14:33 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.871 --rc genhtml_branch_coverage=1 00:06:46.871 --rc genhtml_function_coverage=1 00:06:46.871 --rc genhtml_legend=1 00:06:46.871 --rc geninfo_all_blocks=1 00:06:46.871 --rc geninfo_unexecuted_blocks=1 00:06:46.871 
00:06:46.871 ' 00:06:46.871 15:14:33 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.871 --rc genhtml_branch_coverage=1 00:06:46.871 --rc genhtml_function_coverage=1 00:06:46.871 --rc genhtml_legend=1 00:06:46.871 --rc geninfo_all_blocks=1 00:06:46.871 --rc geninfo_unexecuted_blocks=1 00:06:46.871 00:06:46.871 ' 00:06:46.871 15:14:33 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.871 --rc genhtml_branch_coverage=1 00:06:46.871 --rc genhtml_function_coverage=1 00:06:46.872 --rc genhtml_legend=1 00:06:46.872 --rc geninfo_all_blocks=1 00:06:46.872 --rc geninfo_unexecuted_blocks=1 00:06:46.872 00:06:46.872 ' 00:06:46.872 15:14:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:46.872 15:14:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61852 00:06:46.872 15:14:33 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:46.872 15:14:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61852 00:06:46.872 15:14:33 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61852 ']' 00:06:46.872 15:14:33 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.872 15:14:33 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.872 15:14:33 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.131 15:14:33 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.131 15:14:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.131 [2024-11-20 15:14:33.412794] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
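Note the launch line above: cmdline.sh starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two JSON-RPC methods are reachable on /var/tmp/spdk.sock. A rough by-hand equivalent of the launch-and-wait sequence, with a crude socket poll standing in for the harness's waitforlisten:

    # Start spdk_tgt with an RPC allow-list, wait for its UNIX socket, query it.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt_pid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version  # allowed
    kill "$tgt_pid"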
00:06:47.131 [2024-11-20 15:14:33.412901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61852 ] 00:06:47.131 [2024-11-20 15:14:33.554891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.391 [2024-11-20 15:14:33.643229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.958 15:14:34 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.958 15:14:34 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:47.958 15:14:34 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:48.218 { 00:06:48.218 "fields": { 00:06:48.218 "commit": "1981e6eec", 00:06:48.218 "major": 25, 00:06:48.218 "minor": 1, 00:06:48.218 "patch": 0, 00:06:48.218 "suffix": "-pre" 00:06:48.218 }, 00:06:48.218 "version": "SPDK v25.01-pre git sha1 1981e6eec" 00:06:48.218 } 00:06:48.218 15:14:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:48.218 15:14:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:48.218 15:14:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:48.218 15:14:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:48.218 15:14:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:48.218 15:14:34 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.218 15:14:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.218 15:14:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:48.218 15:14:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:48.218 15:14:34 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.476 15:14:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:48.476 15:14:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:48.476 15:14:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.476 15:14:34 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:48.476 15:14:34 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.477 15:14:34 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.477 15:14:34 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.477 15:14:34 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.477 15:14:34 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.477 15:14:34 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.477 15:14:34 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.477 15:14:34 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.477 15:14:34 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:48.477 15:14:34 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.477 2024/11/20 15:14:34 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:48.477 request: 00:06:48.477 { 00:06:48.477 "method": "env_dpdk_get_mem_stats", 00:06:48.477 "params": {} 00:06:48.477 } 00:06:48.477 Got JSON-RPC error response 00:06:48.477 GoRPCClient: error on JSON-RPC call 00:06:48.736 15:14:34 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:48.736 15:14:34 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.736 15:14:34 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.736 15:14:34 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.736 15:14:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61852 00:06:48.736 15:14:34 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61852 ']' 00:06:48.736 15:14:34 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61852 00:06:48.736 15:14:34 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:48.736 15:14:34 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.736 15:14:34 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61852 00:06:48.736 killing process with pid 61852 00:06:48.736 15:14:35 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.736 15:14:35 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.736 15:14:35 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61852' 00:06:48.736 15:14:35 app_cmdline -- common/autotest_common.sh@973 -- # kill 61852 00:06:48.736 15:14:35 app_cmdline -- common/autotest_common.sh@978 -- # wait 61852 00:06:49.304 00:06:49.304 real 0m2.462s 00:06:49.304 user 0m2.752s 00:06:49.304 sys 0m0.756s 00:06:49.304 15:14:35 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.304 ************************************ 00:06:49.304 END TEST app_cmdline 00:06:49.304 ************************************ 00:06:49.304 15:14:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.304 15:14:35 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:49.304 15:14:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.304 15:14:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.304 15:14:35 -- common/autotest_common.sh@10 -- # set +x 00:06:49.304 ************************************ 00:06:49.304 START TEST version 00:06:49.304 ************************************ 00:06:49.304 15:14:35 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:49.304 * Looking for test storage... 
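The env_dpdk_get_mem_stats failure a few entries up is the negative half of the app_cmdline test, not a regression: the method exists in SPDK but is not on the allow-list, so the target answers with JSON-RPC error -32601 (Method not found), rpc.py exits non-zero, and the NOT wrapper turns that es=1 into a pass. Probing it by hand while the allow-listed target is running would reproduce the error verbatim:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # => err: Code=-32601 Msg=Method not found (expected under --rpcs-allowed)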
00:06:49.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:49.304 15:14:35 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.304 15:14:35 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.304 15:14:35 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.564 15:14:35 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.564 15:14:35 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.564 15:14:35 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.564 15:14:35 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.564 15:14:35 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.564 15:14:35 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.564 15:14:35 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.564 15:14:35 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.564 15:14:35 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.564 15:14:35 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.564 15:14:35 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.564 15:14:35 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.564 15:14:35 version -- scripts/common.sh@344 -- # case "$op" in 00:06:49.564 15:14:35 version -- scripts/common.sh@345 -- # : 1 00:06:49.564 15:14:35 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.564 15:14:35 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.564 15:14:35 version -- scripts/common.sh@365 -- # decimal 1 00:06:49.564 15:14:35 version -- scripts/common.sh@353 -- # local d=1 00:06:49.564 15:14:35 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.564 15:14:35 version -- scripts/common.sh@355 -- # echo 1 00:06:49.564 15:14:35 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.564 15:14:35 version -- scripts/common.sh@366 -- # decimal 2 00:06:49.564 15:14:35 version -- scripts/common.sh@353 -- # local d=2 00:06:49.564 15:14:35 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.564 15:14:35 version -- scripts/common.sh@355 -- # echo 2 00:06:49.564 15:14:35 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.564 15:14:35 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.564 15:14:35 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.564 15:14:35 version -- scripts/common.sh@368 -- # return 0 00:06:49.564 15:14:35 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.564 15:14:35 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.564 --rc genhtml_branch_coverage=1 00:06:49.564 --rc genhtml_function_coverage=1 00:06:49.564 --rc genhtml_legend=1 00:06:49.564 --rc geninfo_all_blocks=1 00:06:49.564 --rc geninfo_unexecuted_blocks=1 00:06:49.564 00:06:49.564 ' 00:06:49.564 15:14:35 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.564 --rc genhtml_branch_coverage=1 00:06:49.564 --rc genhtml_function_coverage=1 00:06:49.564 --rc genhtml_legend=1 00:06:49.564 --rc geninfo_all_blocks=1 00:06:49.564 --rc geninfo_unexecuted_blocks=1 00:06:49.564 00:06:49.564 ' 00:06:49.564 15:14:35 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.564 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:49.564 --rc genhtml_branch_coverage=1 00:06:49.564 --rc genhtml_function_coverage=1 00:06:49.564 --rc genhtml_legend=1 00:06:49.564 --rc geninfo_all_blocks=1 00:06:49.564 --rc geninfo_unexecuted_blocks=1 00:06:49.564 00:06:49.564 ' 00:06:49.564 15:14:35 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.564 --rc genhtml_branch_coverage=1 00:06:49.564 --rc genhtml_function_coverage=1 00:06:49.564 --rc genhtml_legend=1 00:06:49.564 --rc geninfo_all_blocks=1 00:06:49.564 --rc geninfo_unexecuted_blocks=1 00:06:49.564 00:06:49.564 ' 00:06:49.564 15:14:35 version -- app/version.sh@17 -- # get_header_version major 00:06:49.564 15:14:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.564 15:14:35 version -- app/version.sh@14 -- # cut -f2 00:06:49.564 15:14:35 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.564 15:14:35 version -- app/version.sh@17 -- # major=25 00:06:49.564 15:14:35 version -- app/version.sh@18 -- # get_header_version minor 00:06:49.564 15:14:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.564 15:14:35 version -- app/version.sh@14 -- # cut -f2 00:06:49.564 15:14:35 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.564 15:14:35 version -- app/version.sh@18 -- # minor=1 00:06:49.564 15:14:35 version -- app/version.sh@19 -- # get_header_version patch 00:06:49.564 15:14:35 version -- app/version.sh@14 -- # cut -f2 00:06:49.564 15:14:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.564 15:14:35 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.565 15:14:35 version -- app/version.sh@19 -- # patch=0 00:06:49.565 15:14:35 version -- app/version.sh@20 -- # get_header_version suffix 00:06:49.565 15:14:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.565 15:14:35 version -- app/version.sh@14 -- # cut -f2 00:06:49.565 15:14:35 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.565 15:14:35 version -- app/version.sh@20 -- # suffix=-pre 00:06:49.565 15:14:35 version -- app/version.sh@22 -- # version=25.1 00:06:49.565 15:14:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:49.565 15:14:35 version -- app/version.sh@28 -- # version=25.1rc0 00:06:49.565 15:14:35 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:49.565 15:14:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:49.565 15:14:35 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:49.565 15:14:35 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:49.565 00:06:49.565 real 0m0.327s 00:06:49.565 user 0m0.181s 00:06:49.565 sys 0m0.206s 00:06:49.565 15:14:35 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.565 15:14:35 version -- common/autotest_common.sh@10 -- # set +x 00:06:49.565 ************************************ 00:06:49.565 END TEST version 00:06:49.565 ************************************ 00:06:49.565 15:14:36 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:49.565 15:14:36 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:49.565 15:14:36 -- spdk/autotest.sh@194 -- # uname -s 00:06:49.565 15:14:36 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:49.565 15:14:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:49.565 15:14:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:49.565 15:14:36 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:49.565 15:14:36 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:49.565 15:14:36 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:49.565 15:14:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:49.565 15:14:36 -- common/autotest_common.sh@10 -- # set +x 00:06:49.824 15:14:36 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:49.824 15:14:36 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:49.824 15:14:36 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:49.824 15:14:36 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:49.824 15:14:36 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:49.824 15:14:36 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:49.824 15:14:36 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:49.824 15:14:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.824 15:14:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.824 15:14:36 -- common/autotest_common.sh@10 -- # set +x 00:06:49.824 ************************************ 00:06:49.824 START TEST nvmf_tcp 00:06:49.824 ************************************ 00:06:49.824 15:14:36 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:49.824 * Looking for test storage... 00:06:49.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:49.824 15:14:36 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.824 15:14:36 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.824 15:14:36 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.083 15:14:36 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.083 15:14:36 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:50.083 15:14:36 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.083 15:14:36 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.083 --rc genhtml_branch_coverage=1 00:06:50.083 --rc genhtml_function_coverage=1 00:06:50.083 --rc genhtml_legend=1 00:06:50.083 --rc geninfo_all_blocks=1 00:06:50.083 --rc geninfo_unexecuted_blocks=1 00:06:50.083 00:06:50.083 ' 00:06:50.083 15:14:36 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.083 --rc genhtml_branch_coverage=1 00:06:50.083 --rc genhtml_function_coverage=1 00:06:50.083 --rc genhtml_legend=1 00:06:50.083 --rc geninfo_all_blocks=1 00:06:50.083 --rc geninfo_unexecuted_blocks=1 00:06:50.083 00:06:50.083 ' 00:06:50.083 15:14:36 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.083 --rc genhtml_branch_coverage=1 00:06:50.083 --rc genhtml_function_coverage=1 00:06:50.083 --rc genhtml_legend=1 00:06:50.083 --rc geninfo_all_blocks=1 00:06:50.083 --rc geninfo_unexecuted_blocks=1 00:06:50.083 00:06:50.083 ' 00:06:50.083 15:14:36 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.083 --rc genhtml_branch_coverage=1 00:06:50.083 --rc genhtml_function_coverage=1 00:06:50.083 --rc genhtml_legend=1 00:06:50.083 --rc geninfo_all_blocks=1 00:06:50.083 --rc geninfo_unexecuted_blocks=1 00:06:50.083 00:06:50.083 ' 00:06:50.083 15:14:36 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:50.083 15:14:36 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:50.083 15:14:36 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:50.083 15:14:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.083 15:14:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.083 15:14:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.083 ************************************ 00:06:50.083 START TEST nvmf_target_core 00:06:50.083 ************************************ 00:06:50.083 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:50.083 * Looking for test storage... 00:06:50.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:50.083 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.083 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.083 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.342 --rc genhtml_branch_coverage=1 00:06:50.342 --rc genhtml_function_coverage=1 00:06:50.342 --rc genhtml_legend=1 00:06:50.342 --rc geninfo_all_blocks=1 00:06:50.342 --rc geninfo_unexecuted_blocks=1 00:06:50.342 00:06:50.342 ' 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.342 --rc genhtml_branch_coverage=1 00:06:50.342 --rc genhtml_function_coverage=1 00:06:50.342 --rc genhtml_legend=1 00:06:50.342 --rc geninfo_all_blocks=1 00:06:50.342 --rc geninfo_unexecuted_blocks=1 00:06:50.342 00:06:50.342 ' 00:06:50.342 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.342 --rc genhtml_branch_coverage=1 00:06:50.343 --rc genhtml_function_coverage=1 00:06:50.343 --rc genhtml_legend=1 00:06:50.343 --rc geninfo_all_blocks=1 00:06:50.343 --rc geninfo_unexecuted_blocks=1 00:06:50.343 00:06:50.343 ' 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.343 --rc genhtml_branch_coverage=1 00:06:50.343 --rc genhtml_function_coverage=1 00:06:50.343 --rc genhtml_legend=1 00:06:50.343 --rc geninfo_all_blocks=1 00:06:50.343 --rc geninfo_unexecuted_blocks=1 00:06:50.343 00:06:50.343 ' 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:50.343 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:50.343 ************************************ 00:06:50.343 START TEST nvmf_abort 00:06:50.343 ************************************ 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:50.343 * Looking for test storage... 
00:06:50.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.343 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.603 --rc genhtml_branch_coverage=1 00:06:50.603 --rc genhtml_function_coverage=1 00:06:50.603 --rc genhtml_legend=1 00:06:50.603 --rc geninfo_all_blocks=1 00:06:50.603 --rc geninfo_unexecuted_blocks=1 00:06:50.603 00:06:50.603 ' 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.603 --rc genhtml_branch_coverage=1 00:06:50.603 --rc genhtml_function_coverage=1 00:06:50.603 --rc genhtml_legend=1 00:06:50.603 --rc geninfo_all_blocks=1 00:06:50.603 --rc geninfo_unexecuted_blocks=1 00:06:50.603 00:06:50.603 ' 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.603 --rc genhtml_branch_coverage=1 00:06:50.603 --rc genhtml_function_coverage=1 00:06:50.603 --rc genhtml_legend=1 00:06:50.603 --rc geninfo_all_blocks=1 00:06:50.603 --rc geninfo_unexecuted_blocks=1 00:06:50.603 00:06:50.603 ' 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.603 --rc genhtml_branch_coverage=1 00:06:50.603 --rc genhtml_function_coverage=1 00:06:50.603 --rc genhtml_legend=1 00:06:50.603 --rc geninfo_all_blocks=1 00:06:50.603 --rc geninfo_unexecuted_blocks=1 00:06:50.603 00:06:50.603 ' 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
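Sourcing nvmf/common.sh here gives abort.sh the same fixture as the suites above: fixed ports 4420-4422, a freshly generated host NQN, and, once nvmftestinit runs, the virtual network that the nvmf_veth_init trace below builds. Condensed to one initiator/target pair (names and addresses exactly as traced; the 10.0.0.2/10.0.0.4 pair and the link-up steps follow the same pattern):

    # veth pairs bridged on the host; the target ends live in a network namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

With that in place, the initiator at 10.0.0.1 reaches the in-namespace target at 10.0.0.3 over nvmf_br, which is what the tcp transport tests rely on.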
00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:06:50.603 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:50.604 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:50.604 
15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:06:50.604 Cannot find device "nvmf_init_br" 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:06:50.604 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:50.604 Cannot find device "nvmf_init_br2" 00:06:50.604 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:06:50.604 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:50.604 Cannot find device "nvmf_tgt_br" 00:06:50.604 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:06:50.604 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:50.604 Cannot find device "nvmf_tgt_br2" 00:06:50.604 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:06:50.604 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:50.604 Cannot find device "nvmf_init_br" 00:06:50.604 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:06:50.604 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:50.604 Cannot find device "nvmf_init_br2" 00:06:50.604 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:06:50.604 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:50.864 Cannot find device "nvmf_tgt_br" 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:50.864 Cannot find device "nvmf_tgt_br2" 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:50.864 Cannot find device "nvmf_br" 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:50.864 Cannot find device "nvmf_init_if" 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:50.864 Cannot find device "nvmf_init_if2" 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:50.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:50.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:50.864 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:51.122 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:51.122 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.181 ms 00:06:51.122 00:06:51.122 --- 10.0.0.3 ping statistics --- 00:06:51.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.122 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:06:51.122 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:51.122 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:51.122 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.111 ms 00:06:51.122 00:06:51.122 --- 10.0.0.4 ping statistics --- 00:06:51.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.122 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:51.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:51.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:06:51.123 00:06:51.123 --- 10.0.0.1 ping statistics --- 00:06:51.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.123 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:51.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:51.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:06:51.123 00:06:51.123 --- 10.0.0.2 ping statistics --- 00:06:51.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.123 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=62298 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 62298 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 62298 ']' 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:51.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.123 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.381 [2024-11-20 15:14:37.659100] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
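
The nvmf_veth_init trace above reduces to a small recipe: one veth pair per side, the target ends moved into a private network namespace, the bridge-facing ends enslaved to a common bridge, iptables opened for the NVMe/TCP port, and a ping to confirm reachability. A minimal sketch of that topology, using the same interface and address names as the trace (one initiator/target pair shown; the second pair and error handling are omitted):

    # create the namespace and one veth pair per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # address the host-side initiator end and the namespaced target end
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    # bridge the br ends so initiator and target can reach each other
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port, then verify reachability as the trace does
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
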
00:06:51.381 [2024-11-20 15:14:37.659347] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.381 [2024-11-20 15:14:37.830666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.640 [2024-11-20 15:14:37.916250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.640 [2024-11-20 15:14:37.916314] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.640 [2024-11-20 15:14:37.916325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.640 [2024-11-20 15:14:37.916334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.640 [2024-11-20 15:14:37.916341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.640 [2024-11-20 15:14:37.917743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.640 [2024-11-20 15:14:37.917829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.640 [2024-11-20 15:14:37.917831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.208 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.208 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:52.208 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:52.208 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:52.208 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.208 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.208 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:52.208 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.208 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.208 [2024-11-20 15:14:38.685631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.466 Malloc0 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.466 
Delay0 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.466 [2024-11-20 15:14:38.786834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.466 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:52.725 [2024-11-20 15:14:38.995062] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:54.679 Initializing NVMe Controllers 00:06:54.679 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:06:54.679 controller IO queue size 128 less than required 00:06:54.679 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:54.679 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:54.679 Initialization complete. Launching workers. 
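
The abort test's target-side setup is easier to read out of the trace as the bare rpc_cmd sequence: create the TCP transport, stack a delay bdev on a malloc bdev so in-flight I/O lingers long enough to be aborted, export it through subsystem cnode0, then fire the abort example at the listener. Condensed from the abort.sh trace above, flags exactly as traced:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc_cmd bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB, 4 KiB blocks
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s latency on every I/O path
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128
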
00:06:54.679 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34532 00:06:54.679 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34593, failed to submit 62 00:06:54.679 success 34536, unsuccessful 57, failed 0 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:54.679 rmmod nvme_tcp 00:06:54.679 rmmod nvme_fabrics 00:06:54.679 rmmod nvme_keyring 00:06:54.679 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 62298 ']' 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 62298 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 62298 ']' 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 62298 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62298 00:06:54.937 killing process with pid 62298 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62298' 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 62298 00:06:54.937 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 62298 00:06:55.196 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:55.196 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:55.196 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:55.196 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:55.196 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:55.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:06:55.455 00:06:55.455 real 0m5.206s 00:06:55.455 user 0m12.705s 00:06:55.455 sys 0m1.605s 00:06:55.455 ************************************ 00:06:55.455 END TEST nvmf_abort 00:06:55.455 ************************************ 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.455 15:14:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.456 15:14:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:55.456 ************************************ 00:06:55.456 START TEST nvmf_ns_hotplug_stress 00:06:55.456 ************************************ 00:06:55.456 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:55.714 * Looking for test storage... 00:06:55.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.714 --rc genhtml_branch_coverage=1 00:06:55.714 --rc genhtml_function_coverage=1 00:06:55.714 --rc genhtml_legend=1 00:06:55.714 --rc geninfo_all_blocks=1 00:06:55.714 --rc geninfo_unexecuted_blocks=1 00:06:55.714 00:06:55.714 ' 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.714 --rc genhtml_branch_coverage=1 00:06:55.714 --rc genhtml_function_coverage=1 00:06:55.714 --rc genhtml_legend=1 00:06:55.714 --rc geninfo_all_blocks=1 00:06:55.714 --rc geninfo_unexecuted_blocks=1 00:06:55.714 00:06:55.714 ' 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.714 --rc genhtml_branch_coverage=1 00:06:55.714 --rc genhtml_function_coverage=1 00:06:55.714 --rc genhtml_legend=1 00:06:55.714 --rc geninfo_all_blocks=1 00:06:55.714 --rc geninfo_unexecuted_blocks=1 00:06:55.714 00:06:55.714 ' 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.714 --rc genhtml_branch_coverage=1 00:06:55.714 --rc genhtml_function_coverage=1 00:06:55.714 --rc genhtml_legend=1 00:06:55.714 --rc geninfo_all_blocks=1 00:06:55.714 --rc geninfo_unexecuted_blocks=1 00:06:55.714 00:06:55.714 ' 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.714 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.715 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:55.975 15:14:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:55.975 Cannot find device "nvmf_init_br" 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:55.975 Cannot find device "nvmf_init_br2" 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:55.975 Cannot find device "nvmf_tgt_br" 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:55.975 Cannot find device "nvmf_tgt_br2" 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:55.975 Cannot find device "nvmf_init_br" 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:55.975 Cannot find device "nvmf_init_br2" 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:55.975 Cannot find device "nvmf_tgt_br" 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:55.975 Cannot find device "nvmf_tgt_br2" 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:55.975 Cannot find device "nvmf_br" 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:55.975 Cannot find device "nvmf_init_if" 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:55.975 Cannot find device "nvmf_init_if2" 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:55.975 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:55.975 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:55.975 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:56.235 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:56.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:56.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.135 ms 00:06:56.495 00:06:56.495 --- 10.0.0.3 ping statistics --- 00:06:56.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.495 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:56.495 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:06:56.495 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.131 ms 00:06:56.495 00:06:56.495 --- 10.0.0.4 ping statistics --- 00:06:56.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.495 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:56.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:56.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:06:56.495 00:06:56.495 --- 10.0.0.1 ping statistics --- 00:06:56.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.495 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:56.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:56.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:06:56.495 00:06:56.495 --- 10.0.0.2 ping statistics --- 00:06:56.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.495 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=62615 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 62615 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 62615 ']' 00:06:56.495 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.495 15:14:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.496 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.496 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.496 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:56.496 [2024-11-20 15:14:42.945913] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:06:56.496 [2024-11-20 15:14:42.946546] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.755 [2024-11-20 15:14:43.104174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.755 [2024-11-20 15:14:43.189643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.755 [2024-11-20 15:14:43.189713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.755 [2024-11-20 15:14:43.189724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.755 [2024-11-20 15:14:43.189733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.755 [2024-11-20 15:14:43.189740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
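
The nvmfappstart/waitforlisten pair that produced the lines above follows a simple pattern: launch nvmf_tgt inside the target namespace, remember its pid, then poll the UNIX-domain RPC socket until the app answers. A reduced sketch of that pattern, assuming the paths shown in the trace (the real waitforlisten in autotest_common.sh adds retry limits and error handling beyond this):

    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the RPC socket until the target is listening
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
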
00:06:56.755 [2024-11-20 15:14:43.191173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.755 [2024-11-20 15:14:43.191314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.755 [2024-11-20 15:14:43.191315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.691 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.691 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:57.691 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:57.691 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.691 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:57.691 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:57.691 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:57.691 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:57.949 [2024-11-20 15:14:44.218596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.949 15:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:58.207 15:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:06:58.466 [2024-11-20 15:14:44.786469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:58.466 15:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:58.767 15:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:59.050 Malloc0 00:06:59.050 15:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:59.309 Delay0 00:06:59.309 15:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.569 15:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:59.828 NULL1 00:06:59.828 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:00.087 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
PERF_PID=62756 00:07:00.087 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:00.087 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:00.087 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.346 Read completed with error (sct=0, sc=11) 00:07:00.346 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.605 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:00.605 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:00.864 true 00:07:00.864 15:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:00.864 15:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.432 15:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.691 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:01.691 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:01.950 true 00:07:01.950 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:01.950 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.210 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.469 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:02.469 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:02.729 true 00:07:02.729 15:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 62756 00:07:02.729 15:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.664 15:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.922 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:03.922 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:03.922 true 00:07:04.181 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:04.181 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.439 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.698 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:04.698 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:04.956 true 00:07:04.956 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:04.956 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.214 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.472 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:05.472 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:05.472 true 00:07:05.730 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:05.730 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.665 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.923 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:06.923 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:07.181 true 00:07:07.181 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:07.181 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.439 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.697 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:07.698 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:07.956 true 00:07:07.956 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:07.956 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.213 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.472 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:08.472 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:08.731 true 00:07:08.731 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:08.731 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.991 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.249 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:09.249 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:09.539 true 00:07:09.539 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:09.539 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.473 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.731 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:10.731 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:10.989 true 00:07:10.989 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:10.989 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.248 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.506 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:11.506 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:11.763 true 00:07:11.763 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:11.763 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.697 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.959 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:12.959 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:13.218 true 00:07:13.218 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:13.218 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.477 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.736 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:13.736 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:13.993 true 00:07:13.993 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:13.993 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.251 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.510 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:14.510 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:14.768 true 00:07:14.768 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:14.768 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.704 15:15:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.963 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:15.963 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:16.221 true 00:07:16.221 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:16.221 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.480 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.737 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:16.737 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:16.995 true 00:07:16.995 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:16.995 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.253 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.512 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:17.512 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:17.771 true 00:07:17.771 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:17.771 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.707 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.966 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:18.966 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:18.966 true 00:07:19.224 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:19.224 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.484 15:15:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.484 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:19.484 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:19.745 true 00:07:19.745 15:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:19.745 15:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.745 15:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.745 15:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:20.745 15:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:21.004 true 00:07:21.004 15:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:21.004 15:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.571 15:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.830 15:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:21.830 15:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:22.087 true 00:07:22.087 15:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:22.087 15:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.652 15:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.910 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:22.910 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:23.168 true 00:07:23.168 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:23.168 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.542 15:15:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.802 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:24.802 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:25.085 true 00:07:25.085 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:25.085 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.033 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.033 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:26.034 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:26.293 true 00:07:26.293 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:26.293 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.552 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.811 15:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:26.811 15:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:27.070 true 00:07:27.070 15:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:27.070 15:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.007 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.007 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:28.007 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:28.265 true 00:07:28.265 
15:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:28.265 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.522 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.780 15:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:28.780 15:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:29.038 true 00:07:29.038 15:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:29.038 15:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.004 15:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.270 15:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:30.270 15:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:30.270 Initializing NVMe Controllers 00:07:30.270 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:07:30.270 Controller IO queue size 128, less than required. 00:07:30.270 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:30.270 Controller IO queue size 128, less than required. 00:07:30.270 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:30.270 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:30.270 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:30.270 Initialization complete. Launching workers. 
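The resize/hotplug cycling above only continues while the perf process is alive: kill -0 delivers no signal and merely tests for process existence. A condensed paraphrase of the loop body visible in the trace (the sh@44-50 lines), assuming rpc.py on PATH for brevity; the real script routes these through its own helpers:

    while kill -0 "$PERF_PID" 2>/dev/null; do    # signal 0: existence test only
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove NSID 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # re-add it backed by Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"                        # grow NULL1 under live I/O
    done

Once spdk_nvme_perf exits after its 30-second run, the kill -0 check fails with "No such process" (visible just below) and the loop ends. In the latency summary that follows, NSID 1, the namespace being hot-plugged on the Delay0 bdev, averages about 107 ms at roughly 524 IOPS, while NSID 2 on the plain NULL1 bdev averages about 11.8 ms at roughly 10.8K IOPS.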
00:07:30.270 ======================================================== 00:07:30.270 Latency(us) 00:07:30.270 Device Information : IOPS MiB/s Average min max 00:07:30.270 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 524.08 0.26 107260.48 1928.62 1077030.80 00:07:30.270 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10850.97 5.30 11795.83 2741.24 494121.36 00:07:30.270 ======================================================== 00:07:30.270 Total : 11375.05 5.55 16194.16 1928.62 1077030.80 00:07:30.270 00:07:30.270 true 00:07:30.529 15:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62756 00:07:30.529 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62756) - No such process 00:07:30.529 15:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62756 00:07:30.529 15:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.788 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:30.788 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:30.788 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:30.788 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:30.788 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.788 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:31.047 null0 00:07:31.047 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.047 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.047 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:31.305 null1 00:07:31.305 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.305 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.305 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:31.563 null2 00:07:31.563 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.563 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.563 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:31.822 null3 00:07:31.822 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.822 
15:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.822 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:32.081 null4 00:07:32.081 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:32.081 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:32.081 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:32.340 null5 00:07:32.340 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:32.340 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:32.340 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:32.340 null6 00:07:32.599 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:32.599 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:32.599 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:32.859 null7 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
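Each of the eight null bdevs created above (null0 through null7) gets a dedicated hotplug worker. The add_remove helper whose expansions fill the rest of the trace has roughly this shape, reconstructed from the sh@14-18 lines echoed throughout (a reconstruction from the trace, not the verbatim script):

    add_remove() {
        local nsid=$1 bdev=$2
        # Ten add/remove cycles per worker against the same subsystem
        for ((i = 0; i < 10; i++)); do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }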
00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
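Each add_remove call is backgrounded and its PID recorded, giving eight hotplug workers running concurrently; the sh@62-64 lines repeating through this stretch correspond to a launcher of this shape (condensed sketch of the pattern in the trace):

    pids=()
    for ((i = 0; i < nthreads; i++)); do     # nthreads=8, set above
        add_remove "$((i + 1))" "null$i" &   # NSIDs 1-8, one per null bdev
        pids+=($!)
    done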
00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.859 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
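While the workers churn NSIDs 1 through 8, the subsystem's current namespace list can be sampled at any moment; a hypothetical spot-check, not part of the test itself (nvmf_get_subsystems is a standard SPDK RPC, the jq filter is purely illustrative):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'

The script itself simply waits on all eight collected PIDs (the "wait 63783 ... 63797" visible below) before moving on.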
00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63783 63784 63786 63788 63790 63792 63794 63797 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.860 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.119 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.119 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.119 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.119 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.119 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.119 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.119 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.119 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.379 
15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:33.379 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:33.638 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:33.638 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:33.638 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:33.638 15:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:33.638 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
[... ~190 further ns_hotplug_stress.sh@16-@18 trace entries condensed: the loop keeps attaching namespaces 1-8 (bdevs null0-null7) to nqn.2016-06.io.spdk:cnode1 and detaching them in randomized order across its ten passes, wall clock 00:07:33.638 through 00:07:38.560 (15:15:20-15:15:24) ...]
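The ns_hotplug_stress.sh@16-@18 markers above all come from one tight loop in test/nvmf/target/ns_hotplug_stress.sh. A minimal sketch of that loop, reconstructed from the traced commands; the shuffle helper and variable names here are assumptions rather than the script's exact code, and in the real run the add and remove calls visibly interleave as if issued concurrently rather than in two serial batches:

    # Ten passes: attach namespaces 1-8 (backed by bdevs null0-null7) to the
    # subsystem, then detach them again, both in randomized order (sh@16-@18).
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    i=0
    while (( i < 10 )); do
        for n in $(shuf -e {1..8}); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(shuf -e {1..8}); do
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
        (( ++i ))
    done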
00:07:38.560 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:38.560 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.560 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:38.560 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.560 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:38.560 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.560 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:38.560 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:38.818 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 62615 ']'
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 62615
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 62615 ']'
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 62615
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62615
00:07:38.818 killing process with pid 62615
15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
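The nvmf/common.sh@121-@129 entries above are nvmfcleanup unloading the kernel initiator stack; the bare rmmod lines are modprobe's verbose output, and nvme_keyring is dropped automatically as a dependency. A rough sketch of the pattern, assuming a backoff between attempts (the retry body is never exercised in this run because the first pass succeeds):

    nvmfcleanup() {
        sync                              # @121: flush outstanding I/O first
        set +e                            # @124: removal can fail while devices are busy
        for i in {1..20}; do              # @125: bounded number of attempts
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1                       # assumed backoff; not visible in this trace
        done
        set -e                            # @128
    }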
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62615'
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 62615
00:07:38.818 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 62615
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:07:39.077 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
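The @297/@791 entries are the iptr helper, and @233-@246 are nvmf_veth_fini dismantling the virtual topology that nvmf_veth_init builds. A condensed sketch of both, with the br/br2 and if/if2 twins folded into a loop for brevity (the real script spells each command out, as the trace shows):

    iptr() {
        # Reload the firewall ruleset minus every SPDK_NVMF-tagged rule.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    nvmf_veth_fini() {
        for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
            ip link set "$br" nomaster    # detach from the bridge, then bring it down
            ip link set "$br" down
        done
        ip link delete nvmf_br type bridge
        ip link delete nvmf_init_if
        ip link delete nvmf_init_if2
        ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
        ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
        remove_spdk_ns                    # finally drop the target's network namespace
    }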
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
00:07:39.335
00:07:39.335 real 0m43.868s
00:07:39.335 user 3m25.763s
00:07:39.335 sys 0m17.833s
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:39.335 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:39.335 ************************************
00:07:39.335 END TEST nvmf_ns_hotplug_stress
00:07:39.335 ************************************
00:07:39.593 15:15:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:39.593 15:15:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:39.593 15:15:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:39.593 15:15:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:39.593 ************************************
00:07:39.593 START TEST nvmf_delete_subsystem
00:07:39.593 ************************************
00:07:39.593 15:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:39.593 * Looking for test storage...
00:07:39.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:07:39.593 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:39.593 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:07:39.593 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
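The scripts/common.sh@333-@344 entries that start here (the trace continues below) gate the lcov coverage flags on lcov being older than 2: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field. A simplified sketch of that comparison; the real cmp_versions tallies lt/gt/eq counters per field (the @343 local shows them), so treat this as a behavioral approximation rather than the script's exact code:

    lt() { cmp_versions "$1" '<' "$2"; }                    # scripts/common.sh@373

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v f1 f2
        IFS=.-: read -ra ver1 <<< "$1"                      # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"                      # "2"    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            f1=${ver1[v]:-0} f2=${ver2[v]:-0}               # missing fields compare as 0
            [[ $f1 =~ ^[0-9]+$ ]] || f1=0                   # non-numeric fields compare as 0
            [[ $f2 =~ ^[0-9]+$ ]] || f2=0
            if (( f1 > f2 )); then [[ $op == '>' ]]; return; fi
            if (( f1 < f2 )); then [[ $op == '<' ]]; return; fi   # 1 < 2 decides lt 1.15 2
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]     # every field equal
    }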
00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.851 --rc genhtml_branch_coverage=1 00:07:39.851 --rc genhtml_function_coverage=1 00:07:39.851 --rc genhtml_legend=1 00:07:39.851 --rc geninfo_all_blocks=1 00:07:39.851 --rc geninfo_unexecuted_blocks=1 00:07:39.851 00:07:39.851 ' 00:07:39.851 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.851 --rc genhtml_branch_coverage=1 00:07:39.851 --rc genhtml_function_coverage=1 00:07:39.851 --rc genhtml_legend=1 00:07:39.851 --rc geninfo_all_blocks=1 00:07:39.851 --rc geninfo_unexecuted_blocks=1 00:07:39.851 00:07:39.852 ' 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:39.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.852 --rc genhtml_branch_coverage=1 00:07:39.852 --rc genhtml_function_coverage=1 00:07:39.852 --rc genhtml_legend=1 00:07:39.852 --rc geninfo_all_blocks=1 00:07:39.852 --rc geninfo_unexecuted_blocks=1 00:07:39.852 00:07:39.852 ' 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.852 --rc genhtml_branch_coverage=1 
00:07:39.852 --rc genhtml_function_coverage=1 00:07:39.852 --rc genhtml_legend=1 00:07:39.852 --rc geninfo_all_blocks=1 00:07:39.852 --rc geninfo_unexecuted_blocks=1 00:07:39.852 00:07:39.852 ' 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:39.852 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:39.852 Cannot find device "nvmf_init_br" 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:39.852 Cannot find device "nvmf_init_br2" 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:39.852 Cannot find device "nvmf_tgt_br" 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:07:39.852 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:39.852 Cannot find device "nvmf_tgt_br2" 00:07:39.853 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:07:39.853 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:39.853 Cannot find device "nvmf_init_br" 00:07:39.853 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:07:39.853 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:39.853 Cannot find device "nvmf_init_br2" 00:07:39.853 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:07:39.853 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:39.853 Cannot find device "nvmf_tgt_br" 00:07:39.853 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:07:39.853 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:40.111 Cannot find device "nvmf_tgt_br2" 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:40.111 Cannot find device "nvmf_br" 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:40.111 Cannot find device "nvmf_init_if" 00:07:40.111 15:15:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:40.111 Cannot find device "nvmf_init_if2" 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:40.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:40.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:40.111 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:40.112 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:40.112 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:40.112 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:40.112 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:40.112 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:40.112 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:40.112 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:40.112 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:40.420 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:40.420 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.157 ms 00:07:40.420 00:07:40.420 --- 10.0.0.3 ping statistics --- 00:07:40.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.420 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:40.420 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:40.420 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:07:40.420 00:07:40.420 --- 10.0.0.4 ping statistics --- 00:07:40.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.420 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:40.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:40.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:07:40.420 00:07:40.420 --- 10.0.0.1 ping statistics --- 00:07:40.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.420 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:40.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:07:40.420 00:07:40.420 --- 10.0.0.2 ping statistics --- 00:07:40.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.420 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=65222 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 65222 00:07:40.420 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 65222 ']' 00:07:40.421 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.421 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.421 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
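The nvmf_veth_init trace above builds the test network: two veth pairs bridged on nvmf_br, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace, addresses in 10.0.0.0/24, iptables ACCEPT rules for port 4420, and a ping across each leg. A minimal standalone sketch of the same topology, reduced to a single initiator/target leg and reusing the interface names and addresses from this trace (the real helper also wires up the second pair and the 10.0.0.2/10.0.0.4 addresses):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator side -> target namespace across the bridge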
00:07:40.421 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.421 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.421 [2024-11-20 15:15:26.800259] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:07:40.421 [2024-11-20 15:15:26.800357] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.679 [2024-11-20 15:15:26.956333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.679 [2024-11-20 15:15:27.036242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.679 [2024-11-20 15:15:27.036305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.679 [2024-11-20 15:15:27.036316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.679 [2024-11-20 15:15:27.036325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.679 [2024-11-20 15:15:27.036332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.679 [2024-11-20 15:15:27.037742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.679 [2024-11-20 15:15:27.037925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.614 [2024-11-20 15:15:27.820835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.614 [2024-11-20 15:15:27.845059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.614 NULL1 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.614 Delay0 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=65273 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:41.614 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:41.614 [2024-11-20 15:15:28.081452] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
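Each rpc_cmd call traced above goes through the SPDK JSON-RPC client against the target started a few lines earlier. Condensed into one place, the provisioning sequence for this test looks like the following (arguments copied from the trace; the rpc path is the repo layout this log uses, and the comments are annotations rather than tool output):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192    # -u: in-capsule data size; -o comes from NVMF_TRANSPORT_OPTS='-t tcp -o'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_null_create NULL1 1000 512            # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s per-I/O latency keeps I/O queued
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The ~1 s Delay0 latencies are what make the test meaningful: spdk_nvme_perf keeps 128 commands in flight, so when the subsystem is deleted two seconds later there are guaranteed to be outstanding I/Os to abort.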
00:07:43.514 15:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.514 15:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.514 15:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 starting I/O failed: -6 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 starting I/O failed: -6 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 starting I/O failed: -6 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 starting I/O failed: -6 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 starting I/O failed: -6 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 starting I/O failed: -6 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 starting I/O failed: -6 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 starting I/O failed: -6 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 starting I/O failed: -6 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 starting I/O failed: -6 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 starting I/O failed: -6 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 [2024-11-20 15:15:30.111290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c27ea0 is same with the state(6) to be set 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write 
completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Write completed with error (sct=0, sc=8) 00:07:43.773 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 starting I/O failed: -6 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 starting I/O failed: -6 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 starting I/O failed: -6 00:07:43.774 Read completed with error (sct=0, 
sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 starting I/O failed: -6 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 starting I/O failed: -6 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 starting I/O failed: -6 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 starting I/O failed: -6 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 starting I/O failed: -6 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 starting I/O failed: -6 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 starting I/O failed: -6 00:07:43.774 [2024-11-20 15:15:30.113940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d08000c40 is same with the state(6) to be set 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read 
completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Write completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:43.774 Read completed with error (sct=0, sc=8) 00:07:44.709 [2024-11-20 15:15:31.094458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c20ee0 is same with the state(6) to be set 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 [2024-11-20 15:15:31.108688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d0800d020 is same with the state(6) to be set 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with 
error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 [2024-11-20 15:15:31.108908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d0800d800 is same with the state(6) to be set 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Read completed with error (sct=0, sc=8) 00:07:44.709 Write completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Write completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Write completed with error (sct=0, sc=8) 00:07:44.710 [2024-11-20 15:15:31.110520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c257e0 is same with the state(6) to be set 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Write completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Write completed with error (sct=0, sc=8) 00:07:44.710 Write completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Write completed with error (sct=0, sc=8) 00:07:44.710 Write completed with error (sct=0, sc=8) 00:07:44.710 Write completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Write completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Write completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 Read completed with error (sct=0, sc=8) 00:07:44.710 [2024-11-20 15:15:31.112202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c24c30 is same with the state(6) to be set 00:07:44.710 Initializing NVMe 
Controllers 00:07:44.710 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:07:44.710 Controller IO queue size 128, less than required. 00:07:44.710 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:44.710 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:44.710 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:44.710 Initialization complete. Launching workers. 00:07:44.710 ======================================================== 00:07:44.710 Latency(us) 00:07:44.710 Device Information : IOPS MiB/s Average min max 00:07:44.710 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.54 0.08 893575.60 452.80 1008832.51 00:07:44.710 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.64 0.07 1008059.94 789.14 1999634.12 00:07:44.710 ======================================================== 00:07:44.710 Total : 323.18 0.16 947647.43 452.80 1999634.12 00:07:44.710 00:07:44.710 [2024-11-20 15:15:31.113139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c20ee0 (9): Bad file descriptor 00:07:44.710 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:44.710 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.710 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:44.710 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65273 00:07:44.710 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65273 00:07:45.277 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (65273) - No such process 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 65273 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 65273 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 65273 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ 
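In the error wall above, sct=0/sc=8 decodes to the generic NVMe status "Command Aborted due to SQ Deletion", and the -6 from "starting I/O failed" is most likely -ENXIO on the host side: exactly what deleting the subsystem under load is meant to provoke, since the Delay0 latencies keep every queued command pending when the queues are torn down. The @34-@38 trace lines are the script's wait-for-perf-exit loop; the loop body itself is not in this log, so the following is only a reconstruction of what the traced statements imply:

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # perf_pid=65273 in this run
        (( delay++ > 30 )) && exit 1             # bail out if perf never exits
        sleep 0.5
    done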
-n '' ]] 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.277 [2024-11-20 15:15:31.642379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=65314 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65314 00:07:45.277 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.535 [2024-11-20 15:15:31.849163] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
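The "NOT wait 65273" trace above is the autotest negative-assertion helper: perf has already been reaped, so wait must fail, and the test treats that failure (es=1) as a pass. Reduced to its core, the helper behaves roughly like this sketch; the real valid_exec_arg/es bookkeeping in autotest_common.sh is more elaborate:

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    NOT wait 65273    # 65273 is gone, wait fails, so NOT (and the test step) succeeds

The second phase that starts here re-creates the subsystem, launches perf again (pid 65314, -t 3), and polls it with the same kill -0/sleep loop, traced below.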
00:07:45.800 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:45.800 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65314 00:07:45.800 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.367 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.367 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65314 00:07:46.367 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.934 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.934 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65314 00:07:46.934 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.501 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.501 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65314 00:07:47.501 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.759 15:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.759 15:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65314 00:07:47.759 15:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.326 15:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:48.326 15:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65314 00:07:48.326 15:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.584 Initializing NVMe Controllers 00:07:48.584 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:07:48.584 Controller IO queue size 128, less than required. 00:07:48.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:48.584 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:48.584 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:48.584 Initialization complete. Launching workers. 
00:07:48.584 ======================================================== 00:07:48.584 Latency(us) 00:07:48.584 Device Information : IOPS MiB/s Average min max 00:07:48.584 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003398.70 1000233.44 1008730.48 00:07:48.584 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006026.85 1000482.96 1042509.35 00:07:48.584 ======================================================== 00:07:48.584 Total : 256.00 0.12 1004712.78 1000233.44 1042509.35 00:07:48.584 00:07:48.842 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:48.842 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65314 00:07:48.842 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (65314) - No such process 00:07:48.842 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 65314 00:07:48.842 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:48.842 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:48.842 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:48.842 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:48.842 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.842 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:48.842 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.842 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.842 rmmod nvme_tcp 00:07:48.842 rmmod nvme_fabrics 00:07:48.842 rmmod nvme_keyring 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 65222 ']' 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 65222 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 65222 ']' 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 65222 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65222 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.101 killing 
process with pid 65222 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65222' 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 65222 00:07:49.101 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 65222 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:49.360 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:49.619 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:49.619 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.619 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:07:49.619 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.619 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:07:49.619 00:07:49.619 real 0m10.023s 00:07:49.619 user 0m28.474s 00:07:49.619 sys 0m2.610s 00:07:49.619 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.619 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.619 ************************************ 00:07:49.619 END TEST nvmf_delete_subsystem 00:07:49.619 ************************************ 00:07:49.619 15:15:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:49.619 15:15:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.619 15:15:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.619 15:15:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:49.619 ************************************ 00:07:49.619 START TEST nvmf_host_management 00:07:49.619 ************************************ 00:07:49.619 15:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:49.619 * Looking for test storage... 00:07:49.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:49.619 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.619 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.619 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:49.880 
15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.880 --rc genhtml_branch_coverage=1 00:07:49.880 --rc genhtml_function_coverage=1 00:07:49.880 --rc genhtml_legend=1 00:07:49.880 --rc geninfo_all_blocks=1 00:07:49.880 --rc geninfo_unexecuted_blocks=1 00:07:49.880 00:07:49.880 ' 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.880 --rc genhtml_branch_coverage=1 00:07:49.880 --rc genhtml_function_coverage=1 00:07:49.880 --rc genhtml_legend=1 00:07:49.880 --rc geninfo_all_blocks=1 00:07:49.880 --rc geninfo_unexecuted_blocks=1 00:07:49.880 00:07:49.880 ' 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.880 --rc genhtml_branch_coverage=1 00:07:49.880 --rc genhtml_function_coverage=1 00:07:49.880 --rc genhtml_legend=1 00:07:49.880 --rc geninfo_all_blocks=1 00:07:49.880 --rc geninfo_unexecuted_blocks=1 00:07:49.880 00:07:49.880 ' 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.880 --rc genhtml_branch_coverage=1 00:07:49.880 --rc 
genhtml_function_coverage=1 00:07:49.880 --rc genhtml_legend=1 00:07:49.880 --rc geninfo_all_blocks=1 00:07:49.880 --rc geninfo_unexecuted_blocks=1 00:07:49.880 00:07:49.880 ' 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.880 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:49.881 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:49.881 15:15:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:49.881 Cannot find device "nvmf_init_br" 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:49.881 Cannot find device "nvmf_init_br2" 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:49.881 Cannot find device "nvmf_tgt_br" 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:49.881 Cannot find device "nvmf_tgt_br2" 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:49.881 Cannot find device "nvmf_init_br" 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:49.881 Cannot find device "nvmf_init_br2" 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:49.881 Cannot find device "nvmf_tgt_br" 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:49.881 Cannot find device "nvmf_tgt_br2" 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:49.881 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:50.141 Cannot find device "nvmf_br" 00:07:50.141 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:50.141 15:15:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:50.141 Cannot find device "nvmf_init_if" 00:07:50.141 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:50.142 Cannot find device "nvmf_init_if2" 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:50.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:50.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:50.142 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:50.402 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:50.402 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.140 ms 00:07:50.402 00:07:50.402 --- 10.0.0.3 ping statistics --- 00:07:50.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.402 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:50.402 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:07:50.402 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:07:50.402 00:07:50.402 --- 10.0.0.4 ping statistics --- 00:07:50.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.402 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:50.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:07:50.402 00:07:50.402 --- 10.0.0.1 ping statistics --- 00:07:50.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.402 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:50.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:07:50.402 00:07:50.402 --- 10.0.0.2 ping statistics --- 00:07:50.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.402 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65603 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65603 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
'[' -z 65603 ']' 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.402 15:15:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.402 [2024-11-20 15:15:36.828111] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:07:50.402 [2024-11-20 15:15:36.828192] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.662 [2024-11-20 15:15:36.983028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.662 [2024-11-20 15:15:37.071364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.662 [2024-11-20 15:15:37.071432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.662 [2024-11-20 15:15:37.071445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.662 [2024-11-20 15:15:37.071459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.662 [2024-11-20 15:15:37.071467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
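[Editor's note] Before nvmf_tgt was launched here, the nvmf_veth_init trace above assembled the test network. A condensed sketch of that topology, reconstructed from the traced ip/iptables commands (this is not the verbatim helper: the intermediate `ip link set ... up` steps are omitted, and only the first of the three tagged firewall rules is shown; the nvmf_init_if2 and FORWARD rules follow the same pattern):

    # Two initiator veth ends stay in the root namespace; both target ends
    # move into nvmf_tgt_ns_spdk. All four bridge-side peers join nvmf_br,
    # giving the initiators (10.0.0.1/.2) L2 reachability to the target
    # listeners (10.0.0.3/.4).
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Each rule carries an SPDK_NVMF comment tag, which is what lets
    # nvmftestfini's iptr sweep them later with:
    #   iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The four pings above (10.0.0.3/.4 from the root namespace, 10.0.0.1/.2 from inside it) verify that topology in both directions, and the `-m 0x1E` core mask on the nvmf_tgt command line is why the reactor lines below come up on cores 1 through 4.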
00:07:50.662 [2024-11-20 15:15:37.072913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.663 [2024-11-20 15:15:37.073127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.663 [2024-11-20 15:15:37.073239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.663 [2024-11-20 15:15:37.073286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.600 [2024-11-20 15:15:37.821619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.600 Malloc0 00:07:51.600 [2024-11-20 15:15:37.928237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=65681 00:07:51.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:51.600 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65681 /var/tmp/bdevperf.sock 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65681 ']' 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.600 { 00:07:51.600 "params": { 00:07:51.600 "name": "Nvme$subsystem", 00:07:51.600 "trtype": "$TEST_TRANSPORT", 00:07:51.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.600 "adrfam": "ipv4", 00:07:51.600 "trsvcid": "$NVMF_PORT", 00:07:51.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.600 "hdgst": ${hdgst:-false}, 00:07:51.600 "ddgst": ${ddgst:-false} 00:07:51.600 }, 00:07:51.600 "method": "bdev_nvme_attach_controller" 00:07:51.600 } 00:07:51.600 EOF 00:07:51.600 )") 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:51.600 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:51.600 "params": { 00:07:51.600 "name": "Nvme0", 00:07:51.600 "trtype": "tcp", 00:07:51.600 "traddr": "10.0.0.3", 00:07:51.600 "adrfam": "ipv4", 00:07:51.600 "trsvcid": "4420", 00:07:51.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.600 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:51.600 "hdgst": false, 00:07:51.600 "ddgst": false 00:07:51.600 }, 00:07:51.600 "method": "bdev_nvme_attach_controller" 00:07:51.600 }' 00:07:51.600 [2024-11-20 15:15:38.060942] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
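[Editor's note] The gen_nvmf_target_json heredoc expanded just above is what bdevperf receives on --json /dev/fd/63; pretty-printed, the single controller entry it renders is:

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.3",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

With that config, the bdevperf process attaches to the target's 10.0.0.3:4420 listener as host0 and runs the `-q 64 -o 65536 -w verify -t 10` workload against the resulting Nvme0n1 bdev.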
00:07:51.600 [2024-11-20 15:15:38.061031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65681 ] 00:07:51.859 [2024-11-20 15:15:38.218336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.859 [2024-11-20 15:15:38.304321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.119 Running I/O for 10 seconds... 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=999 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 999 -ge 100 ']' 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 
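[Editor's note] The `break` at the end of the trace above exits host_management.sh's waitforio loop. A condensed sketch of that loop, reconstructed from the traced commands (the function signature and the retry pacing are assumptions; the per-attempt rpc_cmd/jq pipeline is exactly what the trace shows):

    # Poll bdevperf's RPC socket until the Nvme0n1 bdev has completed at
    # least 100 reads, giving up after 10 attempts. Returning 0 proves I/O
    # is actually flowing before the test starts tearing the host down.
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i
        for (( i = 10; i != 0; i-- )); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 1   # assumed pacing; the real helper's delay is not visible in this trace
        done
        return $ret
    }

In this run the very first poll already reported 999 reads, so the loop broke immediately and the test proceeded to `nvmf_subsystem_remove_host` below; the wall of ABORTED - SQ DELETION completions that follows is the expected fallout of deleting host0's queues while the verify workload is still in flight.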
00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.688 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 [2024-11-20 15:15:39.116082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:52.689 [2024-11-20 15:15:39.116306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.689 [2024-11-20 15:15:39.116503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.689 [2024-11-20 15:15:39.116514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:52.689 [2024-11-20 15:15:39.116524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:52.689 [... repeated qid:1 nvme_qpair.c NOTICE pairs elided: each queued WRITE command (sqid:1 cid:32-63, lba:12288-16256, len:128) and READ command (sqid:1 cid:0-12, lba:8192-9728, len:128) is printed and then completed ABORTED - SQ DELETION (00/08) as the submission queue is torn down for the controller reset ...]
00:07:52.690 [2024-11-20 15:15:39.118544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:52.690 task offset: 9856 on job bdev=Nvme0n1 fails
00:07:52.690
00:07:52.690 Latency(us)
[2024-11-20T15:15:39.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:52.690 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:52.690 Job: Nvme0n1 ended in about 0.59 seconds with error
00:07:52.690 Verification LBA range: start 0x0 length 0x400
00:07:52.690 Nvme0n1 : 0.59 1842.72 115.17 108.40 0.00 32072.42 2118.73 30109.71
00:07:52.690 [2024-11-20T15:15:39.172Z] ===================================================================================================================
00:07:52.690 [2024-11-20T15:15:39.172Z] Total : 1842.72 115.17 108.40 0.00 32072.42 2118.73 30109.71
00:07:52.690 [2024-11-20 15:15:39.120737] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:52.690 [2024-11-20 15:15:39.120768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa63660 (9): Bad file descriptor
00:07:52.690 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:52.690 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:52.690 [2024-11-20 15:15:39.122911] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:52.690
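A note on the access failure above: host_management.sh@85 adds host0 to cnode0's allow list while bdevperf is still reconnecting, and the CONNECT rejection printed below (sct 1, sc 132) is the target refusing the not-yet-allowed host. A minimal standalone sketch of this access-control flow with rpc.py, assuming a subsystem created without --allow-any-host and reusing the NQNs from the log (nvmf_subsystem_remove_host as the inverse RPC is an assumption, not exercised in this run):

    # Sketch: per-host access control on an SPDK NVMe-oF subsystem.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Without --allow-any-host, connects are rejected with
    # "does not allow host ..." until the host NQN is added.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

    # Allow the initiator seen in the log; connects issued before this
    # takes effect still fail with sct 1, sc 132.
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

    # Assumed inverse RPC for revoking access again:
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0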
[2024-11-20 15:15:39.122994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:52.690 [2024-11-20 15:15:39.123015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.690 [2024-11-20 15:15:39.123031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:52.690 [2024-11-20 15:15:39.123042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:52.690 [2024-11-20 15:15:39.123051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:52.690 [2024-11-20 15:15:39.123060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa63660 00:07:52.690 [2024-11-20 15:15:39.123085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa63660 (9): Bad file descriptor 00:07:52.690 [2024-11-20 15:15:39.123099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:52.690 [2024-11-20 15:15:39.123108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:52.690 [2024-11-20 15:15:39.123120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:52.690 [2024-11-20 15:15:39.123131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:07:52.690 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.690 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.690 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.690 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65681 00:07:54.067 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65681) - No such process 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:54.067 { 00:07:54.067 "params": { 00:07:54.067 "name": "Nvme$subsystem", 00:07:54.067 "trtype": "$TEST_TRANSPORT", 00:07:54.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:54.067 "adrfam": "ipv4", 00:07:54.067 "trsvcid": "$NVMF_PORT", 00:07:54.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:54.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:54.067 "hdgst": ${hdgst:-false}, 00:07:54.067 "ddgst": ${ddgst:-false} 00:07:54.067 }, 00:07:54.067 "method": "bdev_nvme_attach_controller" 00:07:54.067 } 00:07:54.067 EOF 00:07:54.067 )") 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:54.067 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:54.067 "params": { 00:07:54.067 "name": "Nvme0", 00:07:54.067 "trtype": "tcp", 00:07:54.067 "traddr": "10.0.0.3", 00:07:54.067 "adrfam": "ipv4", 00:07:54.067 "trsvcid": "4420", 00:07:54.067 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.067 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:54.067 "hdgst": false, 00:07:54.067 "ddgst": false 00:07:54.067 }, 00:07:54.067 "method": "bdev_nvme_attach_controller" 00:07:54.067 }' 00:07:54.067 [2024-11-20 15:15:40.207720] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:07:54.067 [2024-11-20 15:15:40.208291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65732 ] 00:07:54.067 [2024-11-20 15:15:40.361156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.067 [2024-11-20 15:15:40.444427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.326 Running I/O for 1 seconds... 
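The printf/jq pipeline above assembles the one-controller bdev config that bdevperf reads over --json /dev/fd/62. A standalone equivalent, assuming the standard SPDK JSON-config wrapper (a "subsystems" array containing a "bdev" subsystem) around the fragment printed in the log:

    # Sketch: write the attach-controller config to a file instead of a pipe.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same workload as this run: 64 outstanding 64 KiB verify I/Os for 1 second.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1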
00:07:55.274 2015.00 IOPS, 125.94 MiB/s
00:07:55.274 Latency(us)
[2024-11-20T15:15:41.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:55.274 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:55.274 Verification LBA range: start 0x0 length 0x400
00:07:55.274 Nvme0n1 : 1.03 2052.27 128.27 0.00 0.00 30691.85 5184.98 27583.02
[2024-11-20T15:15:41.756Z] ===================================================================================================================
[2024-11-20T15:15:41.756Z] Total : 2052.27 128.27 0.00 0.00 30691.85 5184.98 27583.02
00:07:55.533 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:55.533 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:55.533 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:07:55.533 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:07:55.533 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:55.533 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:55.533 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:55.792 rmmod nvme_tcp
00:07:55.792 rmmod nvme_fabrics
00:07:55.792 rmmod nvme_keyring
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 65603 ']'
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 65603
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 65603 ']'
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 65603
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65603
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
killing process with pid 65603
00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65603' 00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 65603 00:07:55.792 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 65603 00:07:56.079 [2024-11-20 15:15:42.405963] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:56.079 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:56.080 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:56.339 00:07:56.339 real 0m6.788s 00:07:56.339 user 0m24.163s 00:07:56.339 sys 0m2.074s 00:07:56.339 ************************************ 00:07:56.339 END TEST nvmf_host_management 00:07:56.339 ************************************ 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.339 15:15:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:56.599 ************************************ 00:07:56.599 START TEST nvmf_lvol 00:07:56.599 ************************************ 00:07:56.599 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:56.599 * Looking for test storage... 
00:07:56.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:56.599 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.599 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.599 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.599 --rc genhtml_branch_coverage=1 00:07:56.599 --rc genhtml_function_coverage=1 00:07:56.599 --rc genhtml_legend=1 00:07:56.599 --rc geninfo_all_blocks=1 00:07:56.599 --rc geninfo_unexecuted_blocks=1 00:07:56.599 00:07:56.599 ' 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.599 --rc genhtml_branch_coverage=1 00:07:56.599 --rc genhtml_function_coverage=1 00:07:56.599 --rc genhtml_legend=1 00:07:56.599 --rc geninfo_all_blocks=1 00:07:56.599 --rc geninfo_unexecuted_blocks=1 00:07:56.599 00:07:56.599 ' 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.599 --rc genhtml_branch_coverage=1 00:07:56.599 --rc genhtml_function_coverage=1 00:07:56.599 --rc genhtml_legend=1 00:07:56.599 --rc geninfo_all_blocks=1 00:07:56.599 --rc geninfo_unexecuted_blocks=1 00:07:56.599 00:07:56.599 ' 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.599 --rc genhtml_branch_coverage=1 00:07:56.599 --rc genhtml_function_coverage=1 00:07:56.599 --rc genhtml_legend=1 00:07:56.599 --rc geninfo_all_blocks=1 00:07:56.599 --rc geninfo_unexecuted_blocks=1 00:07:56.599 00:07:56.599 ' 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.599 15:15:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.599 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.600 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.859 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.859 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.859 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.859 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.859 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.859 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.859 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.860 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:56.860 
15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:56.860 Cannot find device "nvmf_init_br" 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:56.860 Cannot find device "nvmf_init_br2" 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:56.860 Cannot find device "nvmf_tgt_br" 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:56.860 Cannot find device "nvmf_tgt_br2" 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:56.860 Cannot find device "nvmf_init_br" 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:56.860 Cannot find device "nvmf_init_br2" 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:56.860 Cannot find device "nvmf_tgt_br" 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:56.860 Cannot find device "nvmf_tgt_br2" 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:56.860 Cannot find device "nvmf_br" 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:56.860 Cannot find device "nvmf_init_if" 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:56.860 Cannot find device "nvmf_init_if2" 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:56.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:56.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:56.860 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:57.120 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:57.120 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:07:57.120 00:07:57.120 --- 10.0.0.3 ping statistics --- 00:07:57.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.120 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:57.120 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:57.120 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:07:57.120 00:07:57.120 --- 10.0.0.4 ping statistics --- 00:07:57.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.120 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:07:57.120 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:57.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:57.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:07:57.380 00:07:57.380 --- 10.0.0.1 ping statistics --- 00:07:57.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.380 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:57.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:57.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:07:57.380 00:07:57.380 --- 10.0.0.2 ping statistics --- 00:07:57.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.380 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=66001 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 66001 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 66001 ']' 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.380 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:57.380 [2024-11-20 15:15:43.724447] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
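For reference, the nvmf/common.sh block above (the ip link/addr/netns commands plus the ipts iptables wrapper) is nvmf_veth_init building the NET_TYPE=virt topology: initiator-side veths on the host, target-side veths inside nvmf_tgt_ns_spdk, all joined by the nvmf_br bridge, which is what makes 10.0.0.1 reach 10.0.0.3 in the pings above. A condensed single-pair sketch of the same setup (the run also creates the nvmf_init_if2/nvmf_tgt_if2 pair and a FORWARD accept rule the same way):

    # Sketch: one initiator veth (host side) bridged to one target veth (namespace side).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers so initiator and target share an L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP traffic in, then verify reachability as the log does.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3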
00:07:57.380 [2024-11-20 15:15:43.724551] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.638 [2024-11-20 15:15:43.876757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.638 [2024-11-20 15:15:43.960879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.638 [2024-11-20 15:15:43.960944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.638 [2024-11-20 15:15:43.960954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.638 [2024-11-20 15:15:43.960963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.638 [2024-11-20 15:15:43.960970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.638 [2024-11-20 15:15:43.962323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.638 [2024-11-20 15:15:43.962520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.638 [2024-11-20 15:15:43.962522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.207 15:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.207 15:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:58.207 15:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:58.207 15:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:58.207 15:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.466 15:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.466 15:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:58.466 [2024-11-20 15:15:44.940654] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.724 15:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:58.992 15:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:58.992 15:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:59.279 15:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:59.279 15:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:59.279 15:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:59.846 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=37bf47a2-060b-4d7d-95e8-9dd0ffa736b1 00:07:59.846 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
37bf47a2-060b-4d7d-95e8-9dd0ffa736b1 lvol 20 00:07:59.846 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b4ead943-1d4a-4a77-8ed0-92769590b434 00:07:59.846 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:00.105 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b4ead943-1d4a-4a77-8ed0-92769590b434 00:08:00.363 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:00.623 [2024-11-20 15:15:46.981093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:00.623 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:00.881 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:00.881 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=66149 00:08:00.881 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:01.818 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot b4ead943-1d4a-4a77-8ed0-92769590b434 MY_SNAPSHOT 00:08:02.384 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8cdb9136-7ed6-437f-8c5f-ba972a01fce2 00:08:02.384 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize b4ead943-1d4a-4a77-8ed0-92769590b434 30 00:08:02.643 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 8cdb9136-7ed6-437f-8c5f-ba972a01fce2 MY_CLONE 00:08:02.961 15:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cbaabd9f-2d07-4e0d-b936-421b631d82d2 00:08:02.961 15:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate cbaabd9f-2d07-4e0d-b936-421b631d82d2 00:08:03.896 15:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 66149 00:08:12.009 Initializing NVMe Controllers 00:08:12.009 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:12.009 Controller IO queue size 128, less than required. 00:08:12.009 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:12.009 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:12.009 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:12.009 Initialization complete. Launching workers. 
00:08:12.009 ======================================================== 00:08:12.009 Latency(us) 00:08:12.009 Device Information : IOPS MiB/s Average min max 00:08:12.009 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11046.80 43.15 11588.92 1878.49 70826.40 00:08:12.009 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11043.20 43.14 11590.51 567.25 68879.29 00:08:12.009 ======================================================== 00:08:12.009 Total : 22090.00 86.29 11589.72 567.25 70826.40 00:08:12.009 00:08:12.009 15:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:12.009 15:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b4ead943-1d4a-4a77-8ed0-92769590b434 00:08:12.009 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37bf47a2-060b-4d7d-95e8-9dd0ffa736b1 00:08:12.009 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:12.009 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:12.009 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:12.009 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:12.009 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:12.009 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.009 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:12.009 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.009 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.009 rmmod nvme_tcp 00:08:12.009 rmmod nvme_fabrics 00:08:12.009 rmmod nvme_keyring 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 66001 ']' 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 66001 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 66001 ']' 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 66001 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66001 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.268 killing process with pid 66001 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 66001' 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 66001 00:08:12.268 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 66001 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:12.526 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:12.786 ************************************ 00:08:12.786 END TEST nvmf_lvol 00:08:12.786 ************************************ 00:08:12.786 00:08:12.786 real 0m16.412s 00:08:12.786 user 
1m3.938s 00:08:12.786 sys 0m6.428s 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.786 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.046 ************************************ 00:08:13.046 START TEST nvmf_lvs_grow 00:08:13.046 ************************************ 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:13.046 * Looking for test storage... 00:08:13.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.046 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:13.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.308 --rc genhtml_branch_coverage=1 00:08:13.308 --rc genhtml_function_coverage=1 00:08:13.308 --rc genhtml_legend=1 00:08:13.308 --rc geninfo_all_blocks=1 00:08:13.308 --rc geninfo_unexecuted_blocks=1 00:08:13.308 00:08:13.308 ' 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:13.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.308 --rc genhtml_branch_coverage=1 00:08:13.308 --rc genhtml_function_coverage=1 00:08:13.308 --rc genhtml_legend=1 00:08:13.308 --rc geninfo_all_blocks=1 00:08:13.308 --rc geninfo_unexecuted_blocks=1 00:08:13.308 00:08:13.308 ' 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:13.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.308 --rc genhtml_branch_coverage=1 00:08:13.308 --rc genhtml_function_coverage=1 00:08:13.308 --rc genhtml_legend=1 00:08:13.308 --rc geninfo_all_blocks=1 00:08:13.308 --rc geninfo_unexecuted_blocks=1 00:08:13.308 00:08:13.308 ' 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:13.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.308 --rc genhtml_branch_coverage=1 00:08:13.308 --rc genhtml_function_coverage=1 00:08:13.308 --rc genhtml_legend=1 00:08:13.308 --rc geninfo_all_blocks=1 00:08:13.308 --rc geninfo_unexecuted_blocks=1 00:08:13.308 00:08:13.308 ' 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:13.308 15:15:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.308 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.309 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
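The nvmf_lvol run that finishes above exercises the full logical-volume path end to end: two malloc bdevs are striped into a raid0, an lvstore and a 20 MiB lvol are built on top, the lvol is exported over NVMe/TCP, spdk_nvme_perf drives traffic, and the snapshot/resize/clone/inflate RPCs are hit while I/O is in flight. A condensed sketch of that RPC sequence, with $rpc standing in for the rpc.py path and the captured UUIDs replaced by shell variables (placeholders, not values from this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                       # -> Malloc0
  $rpc bdev_malloc_create 64 512                       # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # prints the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol, prints its UUID
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # while spdk_nvme_perf runs against the listener, walk the lvol lifecycle:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                     # grow the live lvol to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                      # decouple the clone from its snapshot

Every RPC name and argument shape here appears verbatim in the trace above; only the variable capture is editorial shorthand.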
00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
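Those nvmftestinit variables describe the virtual topology the NET_TYPE=virt path builds next: two initiator interfaces stay in the root namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) while the target side lives inside the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), all joined through the nvmf_br bridge. NVMF_TARGET_NS_CMD then wraps the target binary so it runs inside that namespace, as the log shows further down:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1

so listeners bound to 10.0.0.3 inside the namespace are reachable from the root namespace over the bridge.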
00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:13.309 Cannot find device "nvmf_init_br" 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:13.309 Cannot find device "nvmf_init_br2" 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:13.309 Cannot find device "nvmf_tgt_br" 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:13.309 Cannot find device "nvmf_tgt_br2" 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:13.309 Cannot find device "nvmf_init_br" 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:13.309 Cannot find device "nvmf_init_br2" 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:13.309 Cannot find device "nvmf_tgt_br" 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:13.309 Cannot find device "nvmf_tgt_br2" 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:13.309 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:13.570 Cannot find device "nvmf_br" 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:13.570 Cannot find device "nvmf_init_if" 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:13.570 Cannot find device "nvmf_init_if2" 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:13.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:13.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:13.570 15:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:13.570 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:13.570 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:13.570 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:13.570 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:13.570 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
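The ipts helper invoked next is a thin wrapper that tags every rule it inserts with an 'SPDK_NVMF:' comment; that tag is what lets the iptr teardown path (seen in the nvmf_lvol cleanup above) strip exactly the test's rules and nothing else. Both halves of the pairing appear verbatim in the surrounding log:

  # setup: accept NVMe/TCP traffic on an initiator interface, tagged for later removal
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

  # teardown: drop every tagged rule in one pass
  iptables-save | grep -v SPDK_NVMF | iptables-restore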
00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:13.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:13.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.134 ms 00:08:13.830 00:08:13.830 --- 10.0.0.3 ping statistics --- 00:08:13.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.830 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:13.830 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:13.830 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:08:13.830 00:08:13.830 --- 10.0.0.4 ping statistics --- 00:08:13.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.830 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:13.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:08:13.830 00:08:13.830 --- 10.0.0.1 ping statistics --- 00:08:13.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.830 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:13.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:13.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:08:13.830 00:08:13.830 --- 10.0.0.2 ping statistics --- 00:08:13.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.830 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66574 00:08:13.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66574 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 66574 ']' 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.830 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:13.830 [2024-11-20 15:16:00.225398] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:08:13.830 [2024-11-20 15:16:00.225511] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.088 [2024-11-20 15:16:00.371157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.089 [2024-11-20 15:16:00.458917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.089 [2024-11-20 15:16:00.458994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.089 [2024-11-20 15:16:00.459005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.089 [2024-11-20 15:16:00.459013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.089 [2024-11-20 15:16:00.459021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.089 [2024-11-20 15:16:00.459434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:15.024 [2024-11-20 15:16:01.443462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.024 ************************************ 00:08:15.024 START TEST lvs_grow_clean 00:08:15.024 ************************************ 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:15.024 15:16:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:15.024 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:15.591 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:15.591 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:15.591 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d 00:08:15.591 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d 00:08:15.591 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:15.849 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:15.849 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:15.849 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d lvol 150 00:08:16.108 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b7a7b864-c43c-4e72-a232-cd8800439492 00:08:16.108 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:16.108 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:16.366 [2024-11-20 15:16:02.749165] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:16.367 [2024-11-20 15:16:02.749539] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:16.367 true 00:08:16.367 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d 00:08:16.367 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:16.626 15:16:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:16.626 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:16.885 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b7a7b864-c43c-4e72-a232-cd8800439492 00:08:17.145 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:17.404 [2024-11-20 15:16:03.756243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:17.404 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:17.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.663 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66740 00:08:17.663 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:17.663 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.663 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66740 /var/tmp/bdevperf.sock 00:08:17.663 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66740 ']' 00:08:17.663 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.663 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.663 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.663 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.663 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:17.663 [2024-11-20 15:16:04.050199] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:08:17.663 [2024-11-20 15:16:04.050553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66740 ] 00:08:17.922 [2024-11-20 15:16:04.202506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.922 [2024-11-20 15:16:04.287908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.859 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.859 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:18.859 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:18.859 Nvme0n1 00:08:18.859 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:19.117 [ 00:08:19.117 { 00:08:19.117 "aliases": [ 00:08:19.117 "b7a7b864-c43c-4e72-a232-cd8800439492" 00:08:19.117 ], 00:08:19.117 "assigned_rate_limits": { 00:08:19.117 "r_mbytes_per_sec": 0, 00:08:19.117 "rw_ios_per_sec": 0, 00:08:19.117 "rw_mbytes_per_sec": 0, 00:08:19.117 "w_mbytes_per_sec": 0 00:08:19.117 }, 00:08:19.117 "block_size": 4096, 00:08:19.117 "claimed": false, 00:08:19.117 "driver_specific": { 00:08:19.117 "mp_policy": "active_passive", 00:08:19.117 "nvme": [ 00:08:19.117 { 00:08:19.117 "ctrlr_data": { 00:08:19.117 "ana_reporting": false, 00:08:19.118 "cntlid": 1, 00:08:19.118 "firmware_revision": "25.01", 00:08:19.118 "model_number": "SPDK bdev Controller", 00:08:19.118 "multi_ctrlr": true, 00:08:19.118 "oacs": { 00:08:19.118 "firmware": 0, 00:08:19.118 "format": 0, 00:08:19.118 "ns_manage": 0, 00:08:19.118 "security": 0 00:08:19.118 }, 00:08:19.118 "serial_number": "SPDK0", 00:08:19.118 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:19.118 "vendor_id": "0x8086" 00:08:19.118 }, 00:08:19.118 "ns_data": { 00:08:19.118 "can_share": true, 00:08:19.118 "id": 1 00:08:19.118 }, 00:08:19.118 "trid": { 00:08:19.118 "adrfam": "IPv4", 00:08:19.118 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:19.118 "traddr": "10.0.0.3", 00:08:19.118 "trsvcid": "4420", 00:08:19.118 "trtype": "TCP" 00:08:19.118 }, 00:08:19.118 "vs": { 00:08:19.118 "nvme_version": "1.3" 00:08:19.118 } 00:08:19.118 } 00:08:19.118 ] 00:08:19.118 }, 00:08:19.118 "memory_domains": [ 00:08:19.118 { 00:08:19.118 "dma_device_id": "system", 00:08:19.118 "dma_device_type": 1 00:08:19.118 } 00:08:19.118 ], 00:08:19.118 "name": "Nvme0n1", 00:08:19.118 "num_blocks": 38912, 00:08:19.118 "numa_id": -1, 00:08:19.118 "product_name": "NVMe disk", 00:08:19.118 "supported_io_types": { 00:08:19.118 "abort": true, 00:08:19.118 "compare": true, 00:08:19.118 "compare_and_write": true, 00:08:19.118 "copy": true, 00:08:19.118 "flush": true, 00:08:19.118 "get_zone_info": false, 00:08:19.118 "nvme_admin": true, 00:08:19.118 "nvme_io": true, 00:08:19.118 "nvme_io_md": false, 00:08:19.118 "nvme_iov_md": false, 00:08:19.118 "read": true, 00:08:19.118 "reset": true, 00:08:19.118 "seek_data": false, 00:08:19.118 "seek_hole": false, 00:08:19.118 "unmap": true, 00:08:19.118 
"write": true, 00:08:19.118 "write_zeroes": true, 00:08:19.118 "zcopy": false, 00:08:19.118 "zone_append": false, 00:08:19.118 "zone_management": false 00:08:19.118 }, 00:08:19.118 "uuid": "b7a7b864-c43c-4e72-a232-cd8800439492", 00:08:19.118 "zoned": false 00:08:19.118 } 00:08:19.118 ] 00:08:19.118 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66783 00:08:19.118 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:19.118 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:19.377 Running I/O for 10 seconds... 00:08:20.316 Latency(us) 00:08:20.316 [2024-11-20T15:16:06.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.316 Nvme0n1 : 1.00 9944.00 38.84 0.00 0.00 0.00 0.00 0.00 00:08:20.316 [2024-11-20T15:16:06.798Z] =================================================================================================================== 00:08:20.316 [2024-11-20T15:16:06.798Z] Total : 9944.00 38.84 0.00 0.00 0.00 0.00 0.00 00:08:20.316 00:08:21.252 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d 00:08:21.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.252 Nvme0n1 : 2.00 10186.50 39.79 0.00 0.00 0.00 0.00 0.00 00:08:21.252 [2024-11-20T15:16:07.734Z] =================================================================================================================== 00:08:21.252 [2024-11-20T15:16:07.734Z] Total : 10186.50 39.79 0.00 0.00 0.00 0.00 0.00 00:08:21.252 00:08:21.511 true 00:08:21.511 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d 00:08:21.511 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:21.770 15:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:21.770 15:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:21.770 15:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66783 00:08:22.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.337 Nvme0n1 : 3.00 10140.67 39.61 0.00 0.00 0.00 0.00 0.00 00:08:22.337 [2024-11-20T15:16:08.820Z] =================================================================================================================== 00:08:22.338 [2024-11-20T15:16:08.820Z] Total : 10140.67 39.61 0.00 0.00 0.00 0.00 0.00 00:08:22.338 00:08:23.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.272 Nvme0n1 : 4.00 9939.50 38.83 0.00 0.00 0.00 0.00 0.00 00:08:23.272 [2024-11-20T15:16:09.754Z] =================================================================================================================== 00:08:23.272 [2024-11-20T15:16:09.754Z] Total : 9939.50 38.83 0.00 0.00 
0.00 0.00 0.00 00:08:23.272 00:08:24.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.207 Nvme0n1 : 5.00 9768.40 38.16 0.00 0.00 0.00 0.00 0.00 00:08:24.207 [2024-11-20T15:16:10.689Z] =================================================================================================================== 00:08:24.207 [2024-11-20T15:16:10.689Z] Total : 9768.40 38.16 0.00 0.00 0.00 0.00 0.00 00:08:24.207 00:08:25.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.585 Nvme0n1 : 6.00 9726.67 37.99 0.00 0.00 0.00 0.00 0.00 00:08:25.585 [2024-11-20T15:16:12.067Z] =================================================================================================================== 00:08:25.585 [2024-11-20T15:16:12.067Z] Total : 9726.67 37.99 0.00 0.00 0.00 0.00 0.00 00:08:25.585 00:08:26.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.522 Nvme0n1 : 7.00 9619.29 37.58 0.00 0.00 0.00 0.00 0.00 00:08:26.522 [2024-11-20T15:16:13.004Z] =================================================================================================================== 00:08:26.522 [2024-11-20T15:16:13.004Z] Total : 9619.29 37.58 0.00 0.00 0.00 0.00 0.00 00:08:26.522 00:08:27.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.459 Nvme0n1 : 8.00 9600.38 37.50 0.00 0.00 0.00 0.00 0.00 00:08:27.459 [2024-11-20T15:16:13.941Z] =================================================================================================================== 00:08:27.459 [2024-11-20T15:16:13.941Z] Total : 9600.38 37.50 0.00 0.00 0.00 0.00 0.00 00:08:27.459 00:08:28.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.396 Nvme0n1 : 9.00 9548.78 37.30 0.00 0.00 0.00 0.00 0.00 00:08:28.396 [2024-11-20T15:16:14.878Z] =================================================================================================================== 00:08:28.396 [2024-11-20T15:16:14.878Z] Total : 9548.78 37.30 0.00 0.00 0.00 0.00 0.00 00:08:28.396 00:08:29.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.332 Nvme0n1 : 10.00 9451.00 36.92 0.00 0.00 0.00 0.00 0.00 00:08:29.332 [2024-11-20T15:16:15.814Z] =================================================================================================================== 00:08:29.332 [2024-11-20T15:16:15.814Z] Total : 9451.00 36.92 0.00 0.00 0.00 0.00 0.00 00:08:29.332 00:08:29.332 00:08:29.332 Latency(us) 00:08:29.332 [2024-11-20T15:16:15.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.332 Nvme0n1 : 10.01 9455.46 36.94 0.00 0.00 13532.52 5685.05 77064.02 00:08:29.332 [2024-11-20T15:16:15.814Z] =================================================================================================================== 00:08:29.332 [2024-11-20T15:16:15.814Z] Total : 9455.46 36.94 0.00 0.00 13532.52 5685.05 77064.02 00:08:29.332 { 00:08:29.332 "results": [ 00:08:29.332 { 00:08:29.332 "job": "Nvme0n1", 00:08:29.332 "core_mask": "0x2", 00:08:29.332 "workload": "randwrite", 00:08:29.332 "status": "finished", 00:08:29.332 "queue_depth": 128, 00:08:29.332 "io_size": 4096, 00:08:29.332 "runtime": 10.008825, 00:08:29.332 "iops": 9455.455560467886, 00:08:29.332 "mibps": 36.93537328307768, 00:08:29.332 "io_failed": 0, 00:08:29.332 "io_timeout": 0, 00:08:29.332 "avg_latency_us": 
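Two seconds into the run above, the harness grew the lvstore onto its enlarged backing bdev and confirmed the new cluster count while I/O was still in flight. A minimal sketch of that check, with the lvstore UUID and the expected count (99) taken from this run:

    uuid=dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$uuid"
    clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$uuid" |
        jq -r '.[0].total_data_clusters')
    # The grow happens online; the bdevperf job keeps running throughout.
    (( clusters == 99 )) || echo "grow did not take effect: $clusters" >&2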
00:08:29.332 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66740
00:08:29.332 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66740 ']'
00:08:29.332 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66740
00:08:29.332 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:08:29.332 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:29.332 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66740
00:08:29.332 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:29.332 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:29.332 killing process with pid 66740
00:08:29.332 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66740'
00:08:29.332 Received shutdown signal, test time was about 10.000000 seconds
00:08:29.332
00:08:29.332 Latency(us)
[2024-11-20T15:16:15.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-20T15:16:15.814Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:08:29.332 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66740
00:08:29.332 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66740
00:08:29.591 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:08:29.852 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:30.111 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d
00:08:30.111 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:08:30.369 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:08:30.369 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:08:30.369 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:08:30.628 [2024-11-20 15:16:17.063336] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
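The teardown just traced follows a fixed order: stop the initiator, drop the listener, delete the subsystem, and only then inspect the lvstore. Condensed into a sketch (values from this run; 61 free clusters is simply the 99 total minus the 38 allocated to the lvol):

    scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    # 61 free clusters expected after the grow: 99 total - 38 allocated
    scripts/rpc.py bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].free_clusters'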
00:08:30.887 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d
00:08:30.887 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:08:30.887 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d
00:08:30.887 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:30.887 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:30.887 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:30.887 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:30.887 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:30.887 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:30.887 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:30.887 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:08:30.887 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d
00:08:30.887 2024/11/20 15:16:17 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device
00:08:30.887 request:
{
  "method": "bdev_lvol_get_lvstores",
  "params": { "uuid": "dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d" }
}
00:08:30.887 Got JSON-RPC error response
00:08:30.887 GoRPCClient: error on JSON-RPC call
00:08:31.146 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:08:31.146 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:31.146 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:31.146 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
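Deleting the aio bdev hot-removes the lvstore, so the harness asserts that a subsequent query fails (JSON-RPC Code=-19, No such device). The NOT wrapper traced above boils down to inverting the exit status; a minimal equivalent:

    # Expect failure: the lvstore disappeared together with its base bdev.
    if scripts/rpc.py bdev_lvol_get_lvstores -u "$uuid" 2>/dev/null; then
        echo "lvstore query unexpectedly succeeded" >&2
        exit 1
    fi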
00:08:31.146 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:08:31.146 aio_bdev
00:08:31.146 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b7a7b864-c43c-4e72-a232-cd8800439492
00:08:31.146 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b7a7b864-c43c-4e72-a232-cd8800439492
00:08:31.146 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:31.146 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i
00:08:31.146 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:31.146 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:31.146 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:08:31.405 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b7a7b864-c43c-4e72-a232-cd8800439492 -t 2000
00:08:31.664 [
  {
    "aliases": [ "lvs/lvol" ],
    "assigned_rate_limits": { "r_mbytes_per_sec": 0, "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
    "block_size": 4096,
    "claimed": false,
    "driver_specific": {
      "lvol": {
        "base_bdev": "aio_bdev",
        "clone": false,
        "esnap_clone": false,
        "lvol_store_uuid": "dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d",
        "num_allocated_clusters": 38,
        "snapshot": false,
        "thin_provision": false
      }
    },
    "name": "b7a7b864-c43c-4e72-a232-cd8800439492",
    "num_blocks": 38912,
    "product_name": "Logical Volume",
    "supported_io_types": {
      "abort": false, "compare": false, "compare_and_write": false, "copy": false, "flush": false,
      "get_zone_info": false, "nvme_admin": false, "nvme_io": false, "nvme_io_md": false, "nvme_iov_md": false,
      "read": true, "reset": true, "seek_data": true, "seek_hole": true, "unmap": true,
      "write": true, "write_zeroes": true, "zcopy": false, "zone_append": false, "zone_management": false
    },
    "uuid": "b7a7b864-c43c-4e72-a232-cd8800439492",
    "zoned": false
  }
]
00:08:31.665 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0
00:08:31.665 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d
00:08:31.665 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:08:31.922 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:08:31.922 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d
00:08:31.922 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:08:32.181 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:08:32.181 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b7a7b864-c43c-4e72-a232-cd8800439492
00:08:32.438 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d
00:08:32.752 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:08:33.011 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:08:33.579
00:08:33.579 real 0m18.331s
00:08:33.579 user 0m16.455s
00:08:33.579 sys 0m3.304s
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:08:33.579 ************************************
00:08:33.579 END TEST lvs_grow_clean
00:08:33.579 ************************************
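For reference, the clean-mode epilogue above reduces to four steps: delete the lvol, delete the lvstore, drop the aio bdev, and remove its backing file. Condensed, with identifiers from this run:

    scripts/rpc.py bdev_lvol_delete b7a7b864-c43c-4e72-a232-cd8800439492
    scripts/rpc.py bdev_lvol_delete_lvstore -u dc5e2fb5-b42d-4f61-ab3d-0baa33ce2c8d
    scripts/rpc.py bdev_aio_delete aio_bdev
    rm -f test/nvmf/target/aio_bdev   # the file that backed the aio bdev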
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:33.579 ************************************
00:08:33.579 START TEST lvs_grow_dirty
00:08:33.579 ************************************
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:08:33.579 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:08:33.836 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:08:33.836 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:08:34.094 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=42591b22-1765-4288-b7be-dad8612355f0
00:08:34.094 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42591b22-1765-4288-b7be-dad8612355f0
00:08:34.094 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:08:34.352 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:08:34.352 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:08:34.352 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 42591b22-1765-4288-b7be-dad8612355f0 lvol 150
00:08:34.608 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c327cff7-6740-4ee2-ae0c-c9a661ab3dce
00:08:34.608 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:08:34.608 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:08:34.865 [2024-11-20 15:16:21.206158] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:08:34.865 [2024-11-20 15:16:21.206243] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:08:34.865 true
00:08:34.865 15:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42591b22-1765-4288-b7be-dad8612355f0
00:08:34.865 15:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:08:35.124 15:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
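The setup just traced is the core of the grow test: a 200 MiB file-backed aio bdev carrying an lvstore with 4 MiB clusters and a 150 MiB lvol, after which the file is doubled and rescanned so the lvstore has room to grow (note the cluster count stays at 49 until the explicit grow later). A sketch, with the backing-file path as a placeholder for this run's absolute path:

    aio_file=test/nvmf/target/aio_bdev
    truncate -s 200M "$aio_file"
    scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096     # 4096-byte blocks
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume
    truncate -s 400M "$aio_file"                                 # enlarge the file...
    scripts/rpc.py bdev_aio_rescan aio_bdev                      # ...51200 -> 102400 blocks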
00:08:35.124 15:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:08:35.383 15:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c327cff7-6740-4ee2-ae0c-c9a661ab3dce
00:08:35.642 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:08:35.900 [2024-11-20 15:16:22.238449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:08:35.900 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
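Exporting the fresh lvol over NVMe/TCP takes three RPCs, mirroring the trace above (the $lvol variable is the lvol UUID from the sketch earlier):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420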
00:08:36.159 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67189
00:08:36.159 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:08:36.159 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:08:36.159 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67189 /var/tmp/bdevperf.sock
00:08:36.159 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67189 ']'
00:08:36.159 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:08:36.159 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:36.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:08:36.159 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:08:36.159 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:36.159 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:08:36.159 [2024-11-20 15:16:22.566704] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:08:36.159 [2024-11-20 15:16:22.566816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67189 ]
00:08:36.440 [2024-11-20 15:16:22.704497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:36.440 [2024-11-20 15:16:22.785953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:37.030 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:37.030 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:08:37.030 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:08:37.597 Nvme0n1
00:08:37.597 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:08:37.597 [
  {
    "aliases": [ "c327cff7-6740-4ee2-ae0c-c9a661ab3dce" ],
    "assigned_rate_limits": { "r_mbytes_per_sec": 0, "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
    "block_size": 4096,
    "claimed": false,
    "driver_specific": {
      "mp_policy": "active_passive",
      "nvme": [ {
        "ctrlr_data": {
          "ana_reporting": false, "cntlid": 1, "firmware_revision": "25.01",
          "model_number": "SPDK bdev Controller", "multi_ctrlr": true,
          "oacs": { "firmware": 0, "format": 0, "ns_manage": 0, "security": 0 },
          "serial_number": "SPDK0", "subnqn": "nqn.2016-06.io.spdk:cnode0", "vendor_id": "0x8086"
        },
        "ns_data": { "can_share": true, "id": 1 },
        "trid": { "adrfam": "IPv4", "subnqn": "nqn.2016-06.io.spdk:cnode0", "traddr": "10.0.0.3", "trsvcid": "4420", "trtype": "TCP" },
        "vs": { "nvme_version": "1.3" }
      } ]
    },
    "memory_domains": [ { "dma_device_id": "system", "dma_device_type": 1 } ],
    "name": "Nvme0n1",
    "num_blocks": 38912,
    "numa_id": -1,
    "product_name": "NVMe disk",
    "supported_io_types": {
      "abort": true, "compare": true, "compare_and_write": true, "copy": true, "flush": true,
      "get_zone_info": false, "nvme_admin": true, "nvme_io": true, "nvme_io_md": false, "nvme_iov_md": false,
      "read": true, "reset": true, "seek_data": false, "seek_hole": false, "unmap": true,
      "write": true, "write_zeroes": true, "zcopy": false, "zone_append": false, "zone_management": false
    },
    "uuid": "c327cff7-6740-4ee2-ae0c-c9a661ab3dce",
    "zoned": false
  }
]
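bdevperf was started with -z, so it sits idle until driven over its RPC socket; the companion script then kicks off the configured workload. Roughly, under the same flags as this run:

    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!
    # ...attach the target bdev via rpc.py as shown earlier, then start the job:
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests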
"write": true, 00:08:37.597 "write_zeroes": true, 00:08:37.597 "zcopy": false, 00:08:37.597 "zone_append": false, 00:08:37.597 "zone_management": false 00:08:37.597 }, 00:08:37.597 "uuid": "c327cff7-6740-4ee2-ae0c-c9a661ab3dce", 00:08:37.597 "zoned": false 00:08:37.597 } 00:08:37.597 ] 00:08:37.597 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:37.597 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67231 00:08:37.597 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:37.856 Running I/O for 10 seconds... 00:08:38.790 Latency(us) 00:08:38.790 [2024-11-20T15:16:25.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.790 Nvme0n1 : 1.00 10432.00 40.75 0.00 0.00 0.00 0.00 0.00 00:08:38.790 [2024-11-20T15:16:25.272Z] =================================================================================================================== 00:08:38.790 [2024-11-20T15:16:25.272Z] Total : 10432.00 40.75 0.00 0.00 0.00 0.00 0.00 00:08:38.790 00:08:39.727 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 42591b22-1765-4288-b7be-dad8612355f0 00:08:39.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.727 Nvme0n1 : 2.00 10518.00 41.09 0.00 0.00 0.00 0.00 0.00 00:08:39.727 [2024-11-20T15:16:26.209Z] =================================================================================================================== 00:08:39.727 [2024-11-20T15:16:26.209Z] Total : 10518.00 41.09 0.00 0.00 0.00 0.00 0.00 00:08:39.727 00:08:40.032 true 00:08:40.033 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:40.033 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42591b22-1765-4288-b7be-dad8612355f0 00:08:40.325 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:40.325 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:40.325 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 67231 00:08:40.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.890 Nvme0n1 : 3.00 10322.33 40.32 0.00 0.00 0.00 0.00 0.00 00:08:40.890 [2024-11-20T15:16:27.372Z] =================================================================================================================== 00:08:40.890 [2024-11-20T15:16:27.372Z] Total : 10322.33 40.32 0.00 0.00 0.00 0.00 0.00 00:08:40.890 00:08:41.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.826 Nvme0n1 : 4.00 10242.75 40.01 0.00 0.00 0.00 0.00 0.00 00:08:41.826 [2024-11-20T15:16:28.308Z] =================================================================================================================== 00:08:41.826 [2024-11-20T15:16:28.308Z] Total : 10242.75 40.01 0.00 
0.00 0.00 0.00 0.00 00:08:41.826 00:08:42.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.763 Nvme0n1 : 5.00 9672.60 37.78 0.00 0.00 0.00 0.00 0.00 00:08:42.763 [2024-11-20T15:16:29.245Z] =================================================================================================================== 00:08:42.763 [2024-11-20T15:16:29.245Z] Total : 9672.60 37.78 0.00 0.00 0.00 0.00 0.00 00:08:42.763 00:08:43.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.733 Nvme0n1 : 6.00 9428.50 36.83 0.00 0.00 0.00 0.00 0.00 00:08:43.733 [2024-11-20T15:16:30.215Z] =================================================================================================================== 00:08:43.733 [2024-11-20T15:16:30.215Z] Total : 9428.50 36.83 0.00 0.00 0.00 0.00 0.00 00:08:43.733 00:08:44.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.732 Nvme0n1 : 7.00 9314.71 36.39 0.00 0.00 0.00 0.00 0.00 00:08:44.732 [2024-11-20T15:16:31.214Z] =================================================================================================================== 00:08:44.732 [2024-11-20T15:16:31.214Z] Total : 9314.71 36.39 0.00 0.00 0.00 0.00 0.00 00:08:44.732 00:08:45.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.729 Nvme0n1 : 8.00 9329.38 36.44 0.00 0.00 0.00 0.00 0.00 00:08:45.729 [2024-11-20T15:16:32.211Z] =================================================================================================================== 00:08:45.729 [2024-11-20T15:16:32.211Z] Total : 9329.38 36.44 0.00 0.00 0.00 0.00 0.00 00:08:45.729 00:08:46.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.664 Nvme0n1 : 9.00 9368.67 36.60 0.00 0.00 0.00 0.00 0.00 00:08:46.664 [2024-11-20T15:16:33.146Z] =================================================================================================================== 00:08:46.664 [2024-11-20T15:16:33.146Z] Total : 9368.67 36.60 0.00 0.00 0.00 0.00 0.00 00:08:46.664 00:08:48.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.038 Nvme0n1 : 10.00 9407.30 36.75 0.00 0.00 0.00 0.00 0.00 00:08:48.038 [2024-11-20T15:16:34.520Z] =================================================================================================================== 00:08:48.038 [2024-11-20T15:16:34.520Z] Total : 9407.30 36.75 0.00 0.00 0.00 0.00 0.00 00:08:48.038 00:08:48.038 00:08:48.038 Latency(us) 00:08:48.038 [2024-11-20T15:16:34.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.038 Nvme0n1 : 10.00 9410.73 36.76 0.00 0.00 13597.29 4184.83 301518.24 00:08:48.038 [2024-11-20T15:16:34.520Z] =================================================================================================================== 00:08:48.038 [2024-11-20T15:16:34.520Z] Total : 9410.73 36.76 0.00 0.00 13597.29 4184.83 301518.24 00:08:48.038 { 00:08:48.038 "results": [ 00:08:48.038 { 00:08:48.038 "job": "Nvme0n1", 00:08:48.038 "core_mask": "0x2", 00:08:48.038 "workload": "randwrite", 00:08:48.038 "status": "finished", 00:08:48.038 "queue_depth": 128, 00:08:48.038 "io_size": 4096, 00:08:48.039 "runtime": 10.003151, 00:08:48.039 "iops": 9410.734677503118, 00:08:48.039 "mibps": 36.760682333996556, 00:08:48.039 "io_failed": 0, 00:08:48.039 "io_timeout": 0, 00:08:48.039 "avg_latency_us": 
13597.292285271833, 00:08:48.039 "min_latency_us": 4184.828915662651, 00:08:48.039 "max_latency_us": 301518.23935742973 00:08:48.039 } 00:08:48.039 ], 00:08:48.039 "core_count": 1 00:08:48.039 } 00:08:48.039 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67189 00:08:48.039 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 67189 ']' 00:08:48.039 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 67189 00:08:48.039 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:48.039 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.039 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67189 00:08:48.039 killing process with pid 67189 00:08:48.039 Received shutdown signal, test time was about 10.000000 seconds 00:08:48.039 00:08:48.039 Latency(us) 00:08:48.039 [2024-11-20T15:16:34.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.039 [2024-11-20T15:16:34.521Z] =================================================================================================================== 00:08:48.039 [2024-11-20T15:16:34.521Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:48.039 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:48.039 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:48.039 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67189' 00:08:48.039 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 67189 00:08:48.039 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 67189 00:08:48.039 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:48.297 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:48.556 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42591b22-1765-4288-b7be-dad8612355f0 00:08:48.556 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:48.814 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:48.814 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:48.814 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66574 00:08:48.814 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66574 00:08:48.814 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66574 Killed "${NVMF_APP[@]}" "$@" 00:08:48.814 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:48.814 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:48.814 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.814 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.814 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:49.073 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=67399 00:08:49.073 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 67399 00:08:49.073 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67399 ']' 00:08:49.073 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.073 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.073 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:49.073 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.073 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.073 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:49.073 [2024-11-20 15:16:35.365084] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:08:49.073 [2024-11-20 15:16:35.365421] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.073 [2024-11-20 15:16:35.521210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.331 [2024-11-20 15:16:35.600212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.331 [2024-11-20 15:16:35.600498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.331 [2024-11-20 15:16:35.600517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.331 [2024-11-20 15:16:35.600526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.331 [2024-11-20 15:16:35.600533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
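This is the point of the dirty variant: the target that owns the lvstore is killed with SIGKILL, so the blobstore never unloads cleanly, and a fresh target is started in the same network namespace. Schematically, with the flags used by this run:

    kill -9 "$nvmfpid"        # simulate a crash: no clean lvstore shutdown
    wait "$nvmfpid" || true   # reaps the Killed job; wait returns non-zero
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &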
00:08:49.898 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:49.898 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:08:49.898 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:49.898 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:49.898 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:08:50.156 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:50.156 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:08:50.156 [2024-11-20 15:16:36.616330] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:08:50.156 [2024-11-20 15:16:36.616967] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:08:50.156 [2024-11-20 15:16:36.617127] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:08:50.415 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:08:50.415 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c327cff7-6740-4ee2-ae0c-c9a661ab3dce
00:08:50.415 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c327cff7-6740-4ee2-ae0c-c9a661ab3dce
00:08:50.415 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:50.415 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:08:50.415 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:50.415 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:50.415 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
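Re-creating the aio bdev on the dirty file is what triggers the blobstore recovery notices above; the lvstore and its lvol come back without any explicit re-creation. The harness then waits for the recovered lvol to reappear before touching it, which boils down to:

    scripts/rpc.py bdev_wait_for_examine
    # -t 2000: fail if the bdev is not visible within 2000 ms
    scripts/rpc.py bdev_get_bdevs -b c327cff7-6740-4ee2-ae0c-c9a661ab3dce -t 2000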
"thin_provision": false 00:08:50.673 } 00:08:50.673 }, 00:08:50.673 "name": "c327cff7-6740-4ee2-ae0c-c9a661ab3dce", 00:08:50.673 "num_blocks": 38912, 00:08:50.673 "product_name": "Logical Volume", 00:08:50.673 "supported_io_types": { 00:08:50.673 "abort": false, 00:08:50.673 "compare": false, 00:08:50.673 "compare_and_write": false, 00:08:50.673 "copy": false, 00:08:50.673 "flush": false, 00:08:50.673 "get_zone_info": false, 00:08:50.673 "nvme_admin": false, 00:08:50.673 "nvme_io": false, 00:08:50.673 "nvme_io_md": false, 00:08:50.673 "nvme_iov_md": false, 00:08:50.673 "read": true, 00:08:50.673 "reset": true, 00:08:50.674 "seek_data": true, 00:08:50.674 "seek_hole": true, 00:08:50.674 "unmap": true, 00:08:50.674 "write": true, 00:08:50.674 "write_zeroes": true, 00:08:50.674 "zcopy": false, 00:08:50.674 "zone_append": false, 00:08:50.674 "zone_management": false 00:08:50.674 }, 00:08:50.674 "uuid": "c327cff7-6740-4ee2-ae0c-c9a661ab3dce", 00:08:50.674 "zoned": false 00:08:50.674 } 00:08:50.674 ] 00:08:50.674 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:50.674 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:50.674 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42591b22-1765-4288-b7be-dad8612355f0 00:08:50.932 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:50.932 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:50.932 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42591b22-1765-4288-b7be-dad8612355f0 00:08:51.191 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:51.191 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:51.449 [2024-11-20 15:16:37.819777] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:51.449 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42591b22-1765-4288-b7be-dad8612355f0 00:08:51.449 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:51.449 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42591b22-1765-4288-b7be-dad8612355f0 00:08:51.449 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:51.450 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.450 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:51.450 15:16:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.450 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:51.450 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.450 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:51.450 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:51.450 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42591b22-1765-4288-b7be-dad8612355f0 00:08:51.709 2024/11/20 15:16:38 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:42591b22-1765-4288-b7be-dad8612355f0], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:08:51.709 request: 00:08:51.709 { 00:08:51.709 "method": "bdev_lvol_get_lvstores", 00:08:51.709 "params": { 00:08:51.709 "uuid": "42591b22-1765-4288-b7be-dad8612355f0" 00:08:51.709 } 00:08:51.709 } 00:08:51.709 Got JSON-RPC error response 00:08:51.709 GoRPCClient: error on JSON-RPC call 00:08:51.709 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:51.709 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:51.709 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:51.709 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:51.709 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.968 aio_bdev 00:08:51.968 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c327cff7-6740-4ee2-ae0c-c9a661ab3dce 00:08:51.968 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c327cff7-6740-4ee2-ae0c-c9a661ab3dce 00:08:51.968 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.968 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:51.968 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.968 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.968 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:52.227 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c327cff7-6740-4ee2-ae0c-c9a661ab3dce -t 2000 00:08:52.485 [ 
00:08:52.485 { 00:08:52.485 "aliases": [ 00:08:52.485 "lvs/lvol" 00:08:52.485 ], 00:08:52.485 "assigned_rate_limits": { 00:08:52.485 "r_mbytes_per_sec": 0, 00:08:52.485 "rw_ios_per_sec": 0, 00:08:52.485 "rw_mbytes_per_sec": 0, 00:08:52.485 "w_mbytes_per_sec": 0 00:08:52.485 }, 00:08:52.485 "block_size": 4096, 00:08:52.485 "claimed": false, 00:08:52.485 "driver_specific": { 00:08:52.485 "lvol": { 00:08:52.485 "base_bdev": "aio_bdev", 00:08:52.485 "clone": false, 00:08:52.485 "esnap_clone": false, 00:08:52.485 "lvol_store_uuid": "42591b22-1765-4288-b7be-dad8612355f0", 00:08:52.485 "num_allocated_clusters": 38, 00:08:52.485 "snapshot": false, 00:08:52.485 "thin_provision": false 00:08:52.485 } 00:08:52.485 }, 00:08:52.485 "name": "c327cff7-6740-4ee2-ae0c-c9a661ab3dce", 00:08:52.485 "num_blocks": 38912, 00:08:52.485 "product_name": "Logical Volume", 00:08:52.485 "supported_io_types": { 00:08:52.485 "abort": false, 00:08:52.485 "compare": false, 00:08:52.485 "compare_and_write": false, 00:08:52.485 "copy": false, 00:08:52.485 "flush": false, 00:08:52.485 "get_zone_info": false, 00:08:52.485 "nvme_admin": false, 00:08:52.485 "nvme_io": false, 00:08:52.485 "nvme_io_md": false, 00:08:52.485 "nvme_iov_md": false, 00:08:52.485 "read": true, 00:08:52.485 "reset": true, 00:08:52.486 "seek_data": true, 00:08:52.486 "seek_hole": true, 00:08:52.486 "unmap": true, 00:08:52.486 "write": true, 00:08:52.486 "write_zeroes": true, 00:08:52.486 "zcopy": false, 00:08:52.486 "zone_append": false, 00:08:52.486 "zone_management": false 00:08:52.486 }, 00:08:52.486 "uuid": "c327cff7-6740-4ee2-ae0c-c9a661ab3dce", 00:08:52.486 "zoned": false 00:08:52.486 } 00:08:52.486 ] 00:08:52.486 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:52.486 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42591b22-1765-4288-b7be-dad8612355f0 00:08:52.486 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:52.745 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:52.745 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42591b22-1765-4288-b7be-dad8612355f0 00:08:52.745 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:53.001 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:53.001 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c327cff7-6740-4ee2-ae0c-c9a661ab3dce 00:08:53.259 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42591b22-1765-4288-b7be-dad8612355f0 00:08:53.518 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:53.777 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:54.038 00:08:54.038 real 0m20.602s 00:08:54.038 user 0m40.742s 00:08:54.038 sys 0m8.284s 00:08:54.038 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.038 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.038 ************************************ 00:08:54.038 END TEST lvs_grow_dirty 00:08:54.038 ************************************ 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:54.297 nvmf_trace.0 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.297 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.556 rmmod nvme_tcp 00:08:54.556 rmmod nvme_fabrics 00:08:54.556 rmmod nvme_keyring 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 67399 ']' 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 67399 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 67399 ']' 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 67399 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:54.556 15:16:40 
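On exit, the trap archives the target's shared-memory trace file so the run can be analyzed offline (the app notices earlier suggested 'spdk_trace -s nvmf -i 0' for live capture). The equivalent one-liner, with the output directory as this job uses it:

    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0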
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.556 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67399 00:08:54.556 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.556 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.556 killing process with pid 67399 00:08:54.556 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67399' 00:08:54.556 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 67399 00:08:54.556 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 67399 00:08:54.815 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.815 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.815 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.815 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:54.815 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.815 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:54.815 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.815 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.815 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:54.815 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:54.815 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.074 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.333 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:55.333 00:08:55.333 real 0m42.258s 00:08:55.333 user 1m3.825s 00:08:55.333 sys 0m12.898s 00:08:55.333 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.333 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:55.333 ************************************ 00:08:55.333 END TEST nvmf_lvs_grow 00:08:55.333 ************************************ 00:08:55.333 15:16:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:55.333 15:16:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.333 15:16:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.333 15:16:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:55.333 ************************************ 00:08:55.333 START TEST nvmf_bdev_io_wait 00:08:55.333 ************************************ 00:08:55.333 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:55.333 * Looking for test storage... 
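The teardown just traced (nvmftestfini -> nvmfcleanup -> nvmf_tcp_fini -> nvmf_veth_fini -> remove_spdk_ns) repeats at the end of every test in this run, so it is worth spelling out once. A condensed bash sketch of the same steps, assuming _remove_spdk_ns boils down to ip netns delete (the trace only shows the xtrace-silenced wrapper):

  # Restore iptables minus the rules the harness tagged with SPDK_NVMF
  # comments when it installed them (the iptr helper in the trace).
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Detach every veth endpoint from the bridge and take it down.
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
  done

  # Delete the bridge, the host-side veths, and the namespace-side veths.
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumed body of _remove_spdk_ns

The harness runs this under set +e (visible around the modprobe retries), so a link that is already gone does not abort the cleanup.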
00:08:55.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:55.333 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.333 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.333 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.593 --rc genhtml_branch_coverage=1 00:08:55.593 --rc genhtml_function_coverage=1 00:08:55.593 --rc genhtml_legend=1 00:08:55.593 --rc geninfo_all_blocks=1 00:08:55.593 --rc geninfo_unexecuted_blocks=1 00:08:55.593 00:08:55.593 ' 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.593 --rc genhtml_branch_coverage=1 00:08:55.593 --rc genhtml_function_coverage=1 00:08:55.593 --rc genhtml_legend=1 00:08:55.593 --rc geninfo_all_blocks=1 00:08:55.593 --rc geninfo_unexecuted_blocks=1 00:08:55.593 00:08:55.593 ' 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.593 --rc genhtml_branch_coverage=1 00:08:55.593 --rc genhtml_function_coverage=1 00:08:55.593 --rc genhtml_legend=1 00:08:55.593 --rc geninfo_all_blocks=1 00:08:55.593 --rc geninfo_unexecuted_blocks=1 00:08:55.593 00:08:55.593 ' 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.593 --rc genhtml_branch_coverage=1 00:08:55.593 --rc genhtml_function_coverage=1 00:08:55.593 --rc genhtml_legend=1 00:08:55.593 --rc geninfo_all_blocks=1 00:08:55.593 --rc geninfo_unexecuted_blocks=1 00:08:55.593 00:08:55.593 ' 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.593 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.594 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
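The common.sh environment established above fixes the listener ports and mints a fresh host identity for each run. A minimal sketch of that setup; the parameter expansion deriving NVME_HOSTID is an assumption, since the trace shows only the generated values:

  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumed: keep just the UUID portion
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

Generating the NQN per run means concurrent CI jobs present distinct host identities when they later run nvme connect "${NVME_HOST[@]}" against the target.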
00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:55.594 
15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:55.594 Cannot find device "nvmf_init_br" 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:55.594 Cannot find device "nvmf_init_br2" 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:55.594 Cannot find device "nvmf_tgt_br" 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:55.594 Cannot find device "nvmf_tgt_br2" 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:55.594 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:55.594 Cannot find device "nvmf_init_br" 00:08:55.594 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:55.594 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:55.594 Cannot find device "nvmf_init_br2" 00:08:55.594 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:55.594 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:55.594 Cannot find device "nvmf_tgt_br" 00:08:55.594 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:55.594 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:55.594 Cannot find device "nvmf_tgt_br2" 00:08:55.594 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:55.594 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:55.854 Cannot find device "nvmf_br" 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:55.854 Cannot find device "nvmf_init_if" 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:55.854 Cannot find device "nvmf_init_if2" 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:55.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:55.854 
15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:55.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:55.854 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:56.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:56.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.162 ms 00:08:56.114 00:08:56.114 --- 10.0.0.3 ping statistics --- 00:08:56.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.114 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:56.114 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:56.114 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:08:56.114 00:08:56.114 --- 10.0.0.4 ping statistics --- 00:08:56.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.114 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:56.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:56.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:56.114 00:08:56.114 --- 10.0.0.1 ping statistics --- 00:08:56.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.114 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:56.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:56.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:08:56.114 00:08:56.114 --- 10.0.0.2 ping statistics --- 00:08:56.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.114 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67873 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67873 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67873 ']' 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.114 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:56.114 [2024-11-20 15:16:42.534538] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
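With all four pings passing, the virtual topology built by nvmf_veth_init above is complete: the SPDK target lives in the nvmf_tgt_ns_spdk namespace and reaches the initiator side through veth pairs joined by the nvmf_br bridge. A condensed sketch of one of the two pairs, using only commands visible in the trace (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is built identically):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers so 10.0.0.1 can reach 10.0.0.3.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Open the NVMe/TCP port and allow bridge forwarding; the SPDK_NVMF
  # comment tag is what lets teardown strip exactly these rules later.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  ping -c 1 10.0.0.3   # host -> target namespace, as verified above

The target itself is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc, as traced), and waitforlisten blocks until pid 67873 is answering on /var/tmp/spdk.sock.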
00:08:56.114 [2024-11-20 15:16:42.534622] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.374 [2024-11-20 15:16:42.687273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.374 [2024-11-20 15:16:42.771027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.374 [2024-11-20 15:16:42.771483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.374 [2024-11-20 15:16:42.771770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.374 [2024-11-20 15:16:42.771944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.374 [2024-11-20 15:16:42.772075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.374 [2024-11-20 15:16:42.773547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.374 [2024-11-20 15:16:42.773743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.374 [2024-11-20 15:16:42.773801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.374 [2024-11-20 15:16:42.773806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.310 [2024-11-20 15:16:43.703050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.310 Malloc0 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.310 [2024-11-20 15:16:43.773572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:57.310 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67926 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67928 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 
0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.311 { 00:08:57.311 "params": { 00:08:57.311 "name": "Nvme$subsystem", 00:08:57.311 "trtype": "$TEST_TRANSPORT", 00:08:57.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.311 "adrfam": "ipv4", 00:08:57.311 "trsvcid": "$NVMF_PORT", 00:08:57.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.311 "hdgst": ${hdgst:-false}, 00:08:57.311 "ddgst": ${ddgst:-false} 00:08:57.311 }, 00:08:57.311 "method": "bdev_nvme_attach_controller" 00:08:57.311 } 00:08:57.311 EOF 00:08:57.311 )") 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67930 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.311 { 00:08:57.311 "params": { 00:08:57.311 "name": "Nvme$subsystem", 00:08:57.311 "trtype": "$TEST_TRANSPORT", 00:08:57.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.311 "adrfam": "ipv4", 00:08:57.311 "trsvcid": "$NVMF_PORT", 00:08:57.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.311 "hdgst": ${hdgst:-false}, 00:08:57.311 "ddgst": ${ddgst:-false} 00:08:57.311 }, 00:08:57.311 "method": "bdev_nvme_attach_controller" 00:08:57.311 } 00:08:57.311 EOF 00:08:57.311 )") 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67933 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.311 { 00:08:57.311 "params": { 00:08:57.311 "name": "Nvme$subsystem", 00:08:57.311 "trtype": "$TEST_TRANSPORT", 00:08:57.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.311 "adrfam": "ipv4", 00:08:57.311 "trsvcid": "$NVMF_PORT", 00:08:57.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.311 "hdgst": ${hdgst:-false}, 00:08:57.311 "ddgst": ${ddgst:-false} 00:08:57.311 }, 00:08:57.311 "method": "bdev_nvme_attach_controller" 00:08:57.311 } 00:08:57.311 EOF 00:08:57.311 )") 00:08:57.311 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.570 { 00:08:57.570 "params": { 00:08:57.570 "name": "Nvme$subsystem", 00:08:57.570 "trtype": "$TEST_TRANSPORT", 00:08:57.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.570 "adrfam": "ipv4", 00:08:57.570 "trsvcid": "$NVMF_PORT", 00:08:57.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.570 "hdgst": ${hdgst:-false}, 00:08:57.570 "ddgst": ${ddgst:-false} 00:08:57.570 }, 00:08:57.570 "method": "bdev_nvme_attach_controller" 00:08:57.570 } 00:08:57.570 EOF 00:08:57.570 )") 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.570 "params": { 00:08:57.570 "name": "Nvme1", 00:08:57.570 "trtype": "tcp", 00:08:57.570 "traddr": "10.0.0.3", 00:08:57.570 "adrfam": "ipv4", 00:08:57.570 "trsvcid": "4420", 00:08:57.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.570 "hdgst": false, 00:08:57.570 "ddgst": false 00:08:57.570 }, 00:08:57.570 "method": "bdev_nvme_attach_controller" 00:08:57.570 }' 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.570 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.570 "params": { 00:08:57.570 "name": "Nvme1", 00:08:57.570 "trtype": "tcp", 00:08:57.570 "traddr": "10.0.0.3", 00:08:57.570 "adrfam": "ipv4", 00:08:57.570 "trsvcid": "4420", 00:08:57.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.570 "hdgst": false, 00:08:57.570 "ddgst": false 00:08:57.570 }, 00:08:57.571 "method": "bdev_nvme_attach_controller" 00:08:57.571 }' 00:08:57.571 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
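The /dev/fd/63 argument visible in every bdevperf command line above is bash process substitution: gen_nvmf_target_json writes the attach configuration into a pipe and bdevperf reads it like a file, so no temporary file is written. A sketch of the generator's inner pattern as traced (heredoc fragments collected in a bash array, then printed through jq); whether the function wraps this object further before bdevperf parses it is not visible in the trace:

  gen_config_sketch() {
      local config=()
      # One fragment per subsystem; values match the printf output above.
      config+=("$(cat <<EOF
  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.3",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
  EOF
      )")
      printf '%s\n' "${config[@]}" | jq .
  }
  # Consumed as: bdevperf --json <(gen_config_sketch) ...  -> the child sees /dev/fd/63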
00:08:57.571 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.571 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.571 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.571 "params": { 00:08:57.571 "name": "Nvme1", 00:08:57.571 "trtype": "tcp", 00:08:57.571 "traddr": "10.0.0.3", 00:08:57.571 "adrfam": "ipv4", 00:08:57.571 "trsvcid": "4420", 00:08:57.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.571 "hdgst": false, 00:08:57.571 "ddgst": false 00:08:57.571 }, 00:08:57.571 "method": "bdev_nvme_attach_controller" 00:08:57.571 }' 00:08:57.571 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:57.571 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.571 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.571 "params": { 00:08:57.571 "name": "Nvme1", 00:08:57.571 "trtype": "tcp", 00:08:57.571 "traddr": "10.0.0.3", 00:08:57.571 "adrfam": "ipv4", 00:08:57.571 "trsvcid": "4420", 00:08:57.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.571 "hdgst": false, 00:08:57.571 "ddgst": false 00:08:57.571 }, 00:08:57.571 "method": "bdev_nvme_attach_controller" 00:08:57.571 }' 00:08:57.571 [2024-11-20 15:16:43.837650] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:08:57.571 [2024-11-20 15:16:43.837732] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:57.571 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67926 00:08:57.571 [2024-11-20 15:16:43.857807] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:08:57.571 [2024-11-20 15:16:43.857887] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:57.571 [2024-11-20 15:16:43.860193] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:08:57.571 [2024-11-20 15:16:43.860253] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:57.571 [2024-11-20 15:16:43.865291] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
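All four bdevperf instances launched above run concurrently against the same cnode1 subsystem, one workload and one dedicated core each, which is why the EAL lines show --file-prefix=spdk1 through spdk4: distinct instance IDs (-i 1..4) keep the DPDK processes from clashing over shared-memory state. The fan-out/fan-in pattern, reconstructed from the traced commands and the stored PIDs (WRITE_PID=67926 and so on):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
  "$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  "$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The script waits on each PID individually (wait 67926 above, then 67928, 67930, 67933 below); a single wait on all four, as sketched, reaches the same barrier.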
00:08:57.571 [2024-11-20 15:16:43.865477] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:08:57.829 [2024-11-20 15:16:44.070310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.829 [2024-11-20 15:16:44.134477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:08:57.829 [2024-11-20 15:16:44.181726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.829 [2024-11-20 15:16:44.261339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:08:57.829 [2024-11-20 15:16:44.300397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:58.087 [2024-11-20 15:16:44.367734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:08:58.087 [2024-11-20 15:16:44.403255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:58.087 Running I/O for 1 seconds...
00:08:58.087 Running I/O for 1 seconds...
00:08:58.087 [2024-11-20 15:16:44.468266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:08:58.087 Running I/O for 1 seconds...
00:08:58.346 Running I/O for 1 seconds...
00:08:59.280 8098.00 IOPS, 31.63 MiB/s
00:08:59.280 Latency(us)
00:08:59.280 [2024-11-20T15:16:45.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:59.280 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:59.280 Nvme1n1 : 1.02 8083.07 31.57 0.00 0.00 15652.48 7001.03 24845.78
00:08:59.280 [2024-11-20T15:16:45.762Z] ===================================================================================================================
00:08:59.280 [2024-11-20T15:16:45.762Z] Total : 8083.07 31.57 0.00 0.00 15652.48 7001.03 24845.78
00:08:59.280 228016.00 IOPS, 890.69 MiB/s
00:08:59.280 Latency(us)
00:08:59.280 [2024-11-20T15:16:45.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:59.280 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:59.280 Nvme1n1 : 1.00 227669.52 889.33 0.00 0.00 559.17 233.59 2079.25
00:08:59.280 [2024-11-20T15:16:45.762Z] ===================================================================================================================
00:08:59.280 [2024-11-20T15:16:45.762Z] Total : 227669.52 889.33 0.00 0.00 559.17 233.59 2079.25
00:08:59.280 10047.00 IOPS, 39.25 MiB/s
00:08:59.280 Latency(us)
00:08:59.280 [2024-11-20T15:16:45.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:59.280 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:59.280 Nvme1n1 : 1.01 10090.18 39.41 0.00 0.00 12628.30 4158.51 16949.87
00:08:59.280 [2024-11-20T15:16:45.762Z] ===================================================================================================================
00:08:59.280 [2024-11-20T15:16:45.762Z] Total : 10090.18 39.41 0.00 0.00 12628.30 4158.51 16949.87
00:08:59.280 8153.00 IOPS, 31.85 MiB/s
00:08:59.280 Latency(us)
00:08:59.280 [2024-11-20T15:16:45.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:59.280 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:59.280 Nvme1n1 : 1.01 8273.82 32.32 0.00 0.00 15433.71 3500.52 39795.35
00:08:59.280 [2024-11-20T15:16:45.762Z] ===================================================================================================================
00:08:59.280 [2024-11-20T15:16:45.762Z] Total : 8273.82 32.32 0.00 0.00 15433.71 3500.52 39795.35
00:08:59.280 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67928
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67930
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67933
00:08:59.538 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:59.538 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:59.538 rmmod nvme_tcp
00:08:59.538 rmmod nvme_fabrics
00:08:59.538 rmmod nvme_keyring
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67873 ']'
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67873
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67873 ']'
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67873
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67873
15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 67873
15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait --
common/autotest_common.sh@972 -- # echo 'killing process with pid 67873' 00:08:59.538 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67873 00:08:59.538 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67873 00:08:59.835 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.835 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:59.835 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:59.835 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:59.835 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:59.835 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:59.835 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:59.835 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.835 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:59.835 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:59.835 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:59.835 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.095 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.354 15:16:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:00.354 00:09:00.354 real 0m4.930s 00:09:00.354 user 0m18.975s 00:09:00.354 sys 0m2.656s 00:09:00.354 ************************************ 00:09:00.354 END TEST nvmf_bdev_io_wait 00:09:00.354 ************************************ 00:09:00.354 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.354 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.354 15:16:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:00.354 15:16:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:00.354 15:16:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.354 15:16:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.354 ************************************ 00:09:00.354 START TEST nvmf_queue_depth 00:09:00.354 ************************************ 00:09:00.355 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:00.355 * Looking for test storage... 00:09:00.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:00.355 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:00.355 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:00.355 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:00.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.614 --rc genhtml_branch_coverage=1 00:09:00.614 --rc genhtml_function_coverage=1 00:09:00.614 --rc genhtml_legend=1 00:09:00.614 --rc geninfo_all_blocks=1 00:09:00.614 --rc geninfo_unexecuted_blocks=1 00:09:00.614 00:09:00.614 ' 00:09:00.614 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:00.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.614 --rc genhtml_branch_coverage=1 00:09:00.614 --rc genhtml_function_coverage=1 00:09:00.615 --rc genhtml_legend=1 00:09:00.615 --rc geninfo_all_blocks=1 00:09:00.615 --rc geninfo_unexecuted_blocks=1 00:09:00.615 00:09:00.615 ' 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:00.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.615 --rc genhtml_branch_coverage=1 00:09:00.615 --rc genhtml_function_coverage=1 00:09:00.615 --rc genhtml_legend=1 00:09:00.615 --rc geninfo_all_blocks=1 00:09:00.615 --rc geninfo_unexecuted_blocks=1 00:09:00.615 00:09:00.615 ' 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:00.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.615 --rc genhtml_branch_coverage=1 00:09:00.615 --rc genhtml_function_coverage=1 00:09:00.615 --rc genhtml_legend=1 00:09:00.615 --rc geninfo_all_blocks=1 00:09:00.615 --rc geninfo_unexecuted_blocks=1 00:09:00.615 00:09:00.615 ' 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:00.615 15:16:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:00.615 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:00.615 
15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:00.615 15:16:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:00.615 Cannot find device "nvmf_init_br" 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:00.615 Cannot find device "nvmf_init_br2" 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:00.615 Cannot find device "nvmf_tgt_br" 00:09:00.615 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:00.616 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.616 Cannot find device "nvmf_tgt_br2" 00:09:00.616 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:00.616 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:00.616 Cannot find device "nvmf_init_br" 00:09:00.616 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:00.616 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:00.616 Cannot find device "nvmf_init_br2" 00:09:00.616 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:00.616 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:00.616 Cannot find device "nvmf_tgt_br" 00:09:00.616 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:00.616 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:00.616 Cannot find device "nvmf_tgt_br2" 00:09:00.616 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:00.616 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:00.616 Cannot find device "nvmf_br" 00:09:00.616 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:00.616 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:00.875 Cannot find device "nvmf_init_if" 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:00.875 Cannot find device "nvmf_init_if2" 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.875 15:16:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:00.875 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:00.875 
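The nvmf_veth_init sequence running through here builds the virtual test topology from scratch: a network namespace for the SPDK target, veth pairs whose host-side peers are enslaved to a common bridge, and the 10.0.0.1-10.0.0.4/24 addresses the rest of the run connects to. A minimal standalone sketch of one initiator/target pair, using only names and addresses taken from this trace (the harness creates a second pair of each the same way, and the remaining enslave commands follow just below):

    # the target interface lives in its own namespace; its veth peer stays on the host
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addresses as assigned in the trace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bring the links up on both sides
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

Two details of the trace are worth noting: every cleanup command issued before this setup is paired with a no-op true, so the "Cannot find device" and "Cannot open network namespace" failures on a clean host are expected and harmless; and the iptables ACCEPT rules added next are tagged with an SPDK_NVMF comment so that teardown can strip exactly those rules with iptables-save | grep -v SPDK_NVMF | iptables-restore, as traced during nvmftestfini.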
15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:01.135 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:01.135 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.148 ms 00:09:01.135 00:09:01.135 --- 10.0.0.3 ping statistics --- 00:09:01.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.135 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:01.135 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:01.135 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:09:01.135 00:09:01.135 --- 10.0.0.4 ping statistics --- 00:09:01.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.135 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:01.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:01.135 00:09:01.135 --- 10.0.0.1 ping statistics --- 00:09:01.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.135 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:01.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:01.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:09:01.135 00:09:01.135 --- 10.0.0.2 ping statistics --- 00:09:01.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.135 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=68226 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 68226 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68226 ']' 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.135 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.135 [2024-11-20 15:16:47.543897] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:09:01.135 [2024-11-20 15:16:47.543987] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.393 [2024-11-20 15:16:47.690298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.393 [2024-11-20 15:16:47.770833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.393 [2024-11-20 15:16:47.770902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.393 [2024-11-20 15:16:47.770913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.393 [2024-11-20 15:16:47.770922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.393 [2024-11-20 15:16:47.770929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.393 [2024-11-20 15:16:47.771325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.328 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.328 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:02.328 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.328 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.328 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.328 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.329 [2024-11-20 15:16:48.509840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.329 Malloc0 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
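From here the target is driven entirely over JSON-RPC: create the TCP transport, back it with a 64 MiB malloc bdev in 512-byte blocks, and expose that bdev through subsystem nqn.2016-06.io.spdk:cnode1. The rpc_cmd helper wraps SPDK's scripts/rpc.py; run by hand against the same target, the equivalent bring-up (including the add-namespace and add-listener steps traced next) would look roughly like this sketch:

    # TCP transport with the same options the harness passes (-o, 8192-byte IO unit size)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB backing bdev, 512-byte block size
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem that allows any host (-a) with serial number SPDK00000000000001 (-s)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

rpc.py talks to the target over its Unix-domain socket (the trace shows it listening on /var/tmp/spdk.sock), so this works from the host even though the target process itself runs inside the nvmf_tgt_ns_spdk namespace.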
00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.329 [2024-11-20 15:16:48.583980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=68276 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 68276 /var/tmp/bdevperf.sock 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68276 ']' 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.329 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.329 [2024-11-20 15:16:48.647695] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:09:02.329 [2024-11-20 15:16:48.647786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68276 ] 00:09:02.329 [2024-11-20 15:16:48.800407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.587 [2024-11-20 15:16:48.879640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.153 15:16:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.153 15:16:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:03.153 15:16:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:03.153 15:16:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.153 15:16:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.153 NVMe0n1 00:09:03.153 15:16:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.153 15:16:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.410 Running I/O for 10 seconds... 00:09:05.302 10804.00 IOPS, 42.20 MiB/s [2024-11-20T15:16:53.161Z] 11239.00 IOPS, 43.90 MiB/s [2024-11-20T15:16:53.730Z] 11296.33 IOPS, 44.13 MiB/s [2024-11-20T15:16:55.108Z] 11417.75 IOPS, 44.60 MiB/s [2024-11-20T15:16:56.045Z] 11472.40 IOPS, 44.81 MiB/s [2024-11-20T15:16:57.015Z] 11559.00 IOPS, 45.15 MiB/s [2024-11-20T15:16:57.947Z] 11557.43 IOPS, 45.15 MiB/s [2024-11-20T15:16:58.887Z] 11584.62 IOPS, 45.25 MiB/s [2024-11-20T15:16:59.822Z] 11629.33 IOPS, 45.43 MiB/s [2024-11-20T15:16:59.822Z] 11709.20 IOPS, 45.74 MiB/s 00:09:13.340 Latency(us) 00:09:13.340 [2024-11-20T15:16:59.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.340 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:13.340 Verification LBA range: start 0x0 length 0x4000 00:09:13.340 NVMe0n1 : 10.05 11742.49 45.87 0.00 0.00 86891.73 14528.46 58956.08 00:09:13.340 [2024-11-20T15:16:59.822Z] =================================================================================================================== 00:09:13.340 [2024-11-20T15:16:59.822Z] Total : 11742.49 45.87 0.00 0.00 86891.73 14528.46 58956.08 00:09:13.340 { 00:09:13.340 "results": [ 00:09:13.340 { 00:09:13.340 "job": "NVMe0n1", 00:09:13.340 "core_mask": "0x1", 00:09:13.340 "workload": "verify", 00:09:13.340 "status": "finished", 00:09:13.340 "verify_range": { 00:09:13.340 "start": 0, 00:09:13.340 "length": 16384 00:09:13.340 }, 00:09:13.340 "queue_depth": 1024, 00:09:13.340 "io_size": 4096, 00:09:13.340 "runtime": 10.054086, 00:09:13.340 "iops": 11742.489570906793, 00:09:13.340 "mibps": 45.86909988635466, 00:09:13.340 "io_failed": 0, 00:09:13.340 "io_timeout": 0, 00:09:13.340 "avg_latency_us": 86891.73157554494, 00:09:13.340 "min_latency_us": 14528.462650602409, 00:09:13.340 "max_latency_us": 58956.08032128514 00:09:13.340 } 00:09:13.340 ], 00:09:13.340 "core_count": 1 00:09:13.340 } 00:09:13.340 15:16:59 
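The run above holds roughly 11.7k IOPS at queue depth 1024 for the full 10 seconds, and bdevperf.py reports it twice: once as the human-readable table and once as the JSON document just printed. If that JSON is captured to a file (result.json is an assumed name for illustration), the headline numbers can be extracted with jq:

    # pull the per-job summary fields out of the perform_tests output
    jq '.results[] | {job, iops, mibps, avg_latency_us}' result.json

which for this run yields the single NVMe0n1 entry with iops of about 11742.49 and an average latency of about 86.9 ms at this depth.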
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 68276 00:09:13.340 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68276 ']' 00:09:13.340 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68276 00:09:13.340 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:13.340 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.340 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68276 00:09:13.598 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.599 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.599 killing process with pid 68276 00:09:13.599 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68276' 00:09:13.599 Received shutdown signal, test time was about 10.000000 seconds 00:09:13.599 00:09:13.599 Latency(us) 00:09:13.599 [2024-11-20T15:17:00.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.599 [2024-11-20T15:17:00.081Z] =================================================================================================================== 00:09:13.599 [2024-11-20T15:17:00.081Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:13.599 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68276 00:09:13.599 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68276 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.859 rmmod nvme_tcp 00:09:13.859 rmmod nvme_fabrics 00:09:13.859 rmmod nvme_keyring 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 68226 ']' 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 68226 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68226 ']' 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- 
# kill -0 68226 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68226 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:13.859 killing process with pid 68226 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68226' 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68226 00:09:13.859 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68226 00:09:14.119 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.119 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.119 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.119 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:14.119 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.119 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:14.119 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.119 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.119 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:14.119 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.399 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.692 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:14.692 00:09:14.692 real 0m14.222s 00:09:14.692 user 0m23.177s 00:09:14.692 sys 0m2.747s 00:09:14.692 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.692 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.692 ************************************ 00:09:14.692 END TEST nvmf_queue_depth 00:09:14.692 ************************************ 00:09:14.692 15:17:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:14.692 15:17:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.692 15:17:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.692 15:17:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.692 ************************************ 00:09:14.692 START TEST nvmf_target_multipath 00:09:14.692 ************************************ 00:09:14.692 15:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:14.692 * Looking for test storage... 
00:09:14.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.692 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:14.692 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:14.692 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:14.692 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:14.692 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.692 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.692 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.692 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.692 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.692 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.692 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:14.693 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:14.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.953 --rc genhtml_branch_coverage=1 00:09:14.953 --rc genhtml_function_coverage=1 00:09:14.953 --rc genhtml_legend=1 00:09:14.953 --rc geninfo_all_blocks=1 00:09:14.953 --rc geninfo_unexecuted_blocks=1 00:09:14.953 00:09:14.953 ' 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:14.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.953 --rc genhtml_branch_coverage=1 00:09:14.953 --rc genhtml_function_coverage=1 00:09:14.953 --rc genhtml_legend=1 00:09:14.953 --rc geninfo_all_blocks=1 00:09:14.953 --rc geninfo_unexecuted_blocks=1 00:09:14.953 00:09:14.953 ' 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:14.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.953 --rc genhtml_branch_coverage=1 00:09:14.953 --rc genhtml_function_coverage=1 00:09:14.953 --rc genhtml_legend=1 00:09:14.953 --rc geninfo_all_blocks=1 00:09:14.953 --rc geninfo_unexecuted_blocks=1 00:09:14.953 00:09:14.953 ' 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:14.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.953 --rc genhtml_branch_coverage=1 00:09:14.953 --rc genhtml_function_coverage=1 00:09:14.953 --rc genhtml_legend=1 00:09:14.953 --rc geninfo_all_blocks=1 00:09:14.953 --rc geninfo_unexecuted_blocks=1 00:09:14.953 00:09:14.953 ' 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.953 
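The PATH records here and in the paths/export.sh lines that follow grow so long because export.sh prepends the same toolchain directories each time it is sourced, so the duplicated prefix accumulates across test scripts. A minimal de-duplication sketch (not part of the SPDK scripts; the awk idiom is an assumption about how one might trim it):

    dedup_path() {
        # keep the first occurrence of each PATH entry, drop repeats
        PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
        PATH=${PATH%:}   # strip the trailing colon that ORS leaves behind
        export PATH
    }

Relatedly, a few records below, common.sh line 33 logs "[: : integer expression expected" because an empty string reaches a numeric test; the usual guard is to default the variable first, e.g. [ "${FLAG:-0}" -eq 1 ] (FLAG is a hypothetical stand-in for whichever variable is empty in this run).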
15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.953 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:14.953 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:14.954 15:17:01 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:14.954 Cannot find device "nvmf_init_br" 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:14.954 Cannot find device "nvmf_init_br2" 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:14.954 Cannot find device "nvmf_tgt_br" 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:14.954 Cannot find device "nvmf_tgt_br2" 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:14.954 Cannot find device "nvmf_init_br" 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:14.954 Cannot find device "nvmf_init_br2" 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:14.954 Cannot find device "nvmf_tgt_br" 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:14.954 Cannot find device "nvmf_tgt_br2" 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:14.954 Cannot find device "nvmf_br" 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:14.954 Cannot find device "nvmf_init_if" 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:14.954 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:14.954 Cannot find device "nvmf_init_if2" 00:09:15.213 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:15.213 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:15.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:15.213 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:15.213 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:15.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
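What nvmf_veth_init has assembled at this point, condensed into a sketch of one of the two initiator/target pairs (the trace above creates two of each, plus the second-IP variants on 10.0.0.2/10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk                      # target-side network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up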
00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:15.214 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:15.473 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:15.473 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:15.473 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:15.473 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:15.473 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:15.473 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:15.473 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:15.473 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:15.473 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:15.473 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:15.473 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:09:15.473 00:09:15.473 --- 10.0.0.3 ping statistics --- 00:09:15.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.473 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:09:15.473 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:15.473 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:15.474 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:09:15.474 00:09:15.474 --- 10.0.0.4 ping statistics --- 00:09:15.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.474 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:15.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
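The bridging and firewall records just above, condensed: the four *_br peer ends are enslaved to a single bridge so the two namespaces can reach each other, and each ACCEPT rule is tagged with an SPDK_NVMF comment so teardown can later remove exactly these rules. The connectivity pings around this point then verify the topology in both directions. A sketch, with the iptables invocation exactly as traced:

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # ipts() appends a '-m comment --comment SPDK_NVMF:...' tag to each rule:
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'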
00:09:15.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:09:15.474 00:09:15.474 --- 10.0.0.1 ping statistics --- 00:09:15.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.474 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:15.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:09:15.474 00:09:15.474 --- 10.0.0.2 ping statistics --- 00:09:15.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.474 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=68667 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 68667 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 68667 ']' 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
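The target launch that follows runs the SPDK app inside the namespace and then waits for its RPC socket to answer. A two-step sketch: the launch command is exactly as traced below, while the polling loop is an assumption about what waitforlisten does (rpc_get_methods is a real SPDK RPC method; the retry cadence here is illustrative):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died
        sleep 0.5
    done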
00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:15.474 15:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.474 [2024-11-20 15:17:01.885503] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:09:15.474 [2024-11-20 15:17:01.885599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.733 [2024-11-20 15:17:02.038718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.733 [2024-11-20 15:17:02.122603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.733 [2024-11-20 15:17:02.122668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.733 [2024-11-20 15:17:02.122687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.733 [2024-11-20 15:17:02.122696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.733 [2024-11-20 15:17:02.122703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.733 [2024-11-20 15:17:02.124112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.733 [2024-11-20 15:17:02.124191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.733 [2024-11-20 15:17:02.124271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.733 [2024-11-20 15:17:02.124275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.672 15:17:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.672 15:17:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:16.672 15:17:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:16.672 15:17:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.673 15:17:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:16.673 15:17:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.673 15:17:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:16.673 [2024-11-20 15:17:03.052872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.673 15:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:16.932 Malloc0 00:09:16.932 15:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
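The provisioning RPCs issued around this point, collapsed into one sketch with arguments exactly as traced (the namespace and the two listeners follow in the next records, which is what gives the initiator two paths to one namespace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420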
00:09:17.190 15:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.448 15:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:17.707 [2024-11-20 15:17:03.979233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:17.707 15:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:17.965 [2024-11-20 15:17:04.199013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:17.965 15:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:17.965 15:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:18.225 15:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.225 15:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:09:18.225 15:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.225 15:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:18.225 15:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68809 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:20.756 15:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:20.756 [global] 00:09:20.756 thread=1 00:09:20.756 invalidate=1 00:09:20.756 rw=randrw 00:09:20.756 time_based=1 00:09:20.756 runtime=6 00:09:20.756 ioengine=libaio 00:09:20.756 direct=1 00:09:20.756 bs=4096 00:09:20.756 iodepth=128 00:09:20.756 norandommap=0 00:09:20.756 numjobs=1 00:09:20.756 00:09:20.756 verify_dump=1 00:09:20.756 verify_backlog=512 00:09:20.756 verify_state_save=0 00:09:20.756 do_verify=1 00:09:20.756 verify=crc32c-intel 00:09:20.756 [job0] 00:09:20.756 filename=/dev/nvme0n1 00:09:20.756 Could not set queue depth (nvme0n1) 00:09:20.756 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.756 fio-3.35 00:09:20.756 Starting 1 thread 00:09:21.323 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:21.582 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:21.841 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:22.777 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:22.777 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:22.777 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:22.778 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:23.037 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:23.297 15:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:24.232 15:17:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:24.233 15:17:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:24.233 15:17:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:24.233 15:17:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68809 00:09:26.769 00:09:26.769 job0: (groupid=0, jobs=1): err= 0: pid=68830: Wed Nov 20 15:17:13 2024 00:09:26.769 read: IOPS=14.0k, BW=54.8MiB/s (57.4MB/s)(329MiB/6003msec) 00:09:26.769 slat (usec): min=3, max=4210, avg=38.12, stdev=150.95 00:09:26.769 clat (usec): min=326, max=47442, avg=6216.41, stdev=1113.54 00:09:26.769 lat (usec): min=389, max=47458, avg=6254.53, stdev=1116.77 00:09:26.769 clat percentiles (usec): 00:09:26.769 | 1.00th=[ 3785], 5.00th=[ 4752], 10.00th=[ 5145], 20.00th=[ 5473], 00:09:26.769 | 30.00th=[ 5735], 40.00th=[ 5932], 50.00th=[ 6128], 60.00th=[ 6390], 00:09:26.769 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 7242], 95.00th=[ 8094], 00:09:26.769 | 99.00th=[ 9503], 99.50th=[10028], 99.90th=[12387], 99.95th=[13829], 00:09:26.769 | 99.99th=[15270] 00:09:26.769 bw ( KiB/s): min=12088, max=35193, per=51.58%, avg=28930.27, stdev=8018.44, samples=11 00:09:26.769 iops : min= 3022, max= 8798, avg=7232.55, stdev=2004.59, samples=11 00:09:26.769 write: IOPS=8446, BW=33.0MiB/s (34.6MB/s)(174MiB/5271msec); 0 zone resets 00:09:26.769 slat (usec): min=9, max=3182, avg=50.63, stdev=98.53 00:09:26.769 clat (usec): min=171, max=14718, avg=5328.65, stdev=936.86 00:09:26.769 lat (usec): min=321, max=14747, avg=5379.29, stdev=938.69 00:09:26.769 clat percentiles (usec): 00:09:26.769 | 1.00th=[ 2868], 5.00th=[ 3851], 10.00th=[ 4359], 20.00th=[ 4752], 00:09:26.769 | 30.00th=[ 5014], 40.00th=[ 5211], 50.00th=[ 5342], 60.00th=[ 5473], 00:09:26.769 | 70.00th=[ 5669], 80.00th=[ 5866], 90.00th=[ 6128], 95.00th=[ 6587], 00:09:26.769 | 99.00th=[ 8455], 99.50th=[ 9110], 99.90th=[11207], 99.95th=[13698], 00:09:26.769 | 99.99th=[14222] 00:09:26.769 bw ( KiB/s): min=12776, max=35944, per=85.75%, avg=28972.36, stdev=7623.31, samples=11 00:09:26.769 iops : min= 3194, max= 8986, avg=7243.09, stdev=1905.83, samples=11 00:09:26.769 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:09:26.769 lat (msec) : 2=0.18%, 4=2.96%, 10=96.42%, 20=0.40%, 50=0.01% 00:09:26.769 cpu : usr=7.46%, sys=31.96%, ctx=9536, majf=0, minf=102 00:09:26.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:26.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.769 issued rwts: total=84179,44524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.769 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.769 00:09:26.769 Run status group 0 (all jobs): 00:09:26.769 READ: bw=54.8MiB/s (57.4MB/s), 54.8MiB/s-54.8MiB/s (57.4MB/s-57.4MB/s), io=329MiB (345MB), run=6003-6003msec 00:09:26.769 WRITE: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=174MiB (182MB), run=5271-5271msec 00:09:26.769 00:09:26.769 Disk stats (read/write): 00:09:26.769 nvme0n1: ios=83264/43528, merge=0/0, ticks=463705/201888, in_queue=665593, util=98.63% 00:09:26.769 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:27.028 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
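check_ana_state, which the surrounding records exercise before and after each fio run, polls the kernel's ANA attribute for one multipath path until it reports the expected state. A sketch faithful to the traced loop, with the same 20-second budget:

    check_ana_state() {
        local path=$1 ana_state=$2 timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1   # give up after ~20 s
            sleep 1s
        done
    }
    check_ana_state nvme0c0n1 optimized   # paths nvme0c0n1/nvme0c1n1 as discovered above

As a sanity check on the fio summary above: 14.0k read IOPS at bs=4096 is 14000 x 4 KiB, roughly 54.7 MiB/s, matching the reported 54.8 MiB/s read bandwidth.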
target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:09:27.287 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:28.221 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:28.222 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:28.222 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:28.222 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:28.222 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68960 00:09:28.222 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:28.222 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:28.222 [global] 00:09:28.222 thread=1 00:09:28.222 invalidate=1 00:09:28.222 rw=randrw 00:09:28.222 time_based=1 00:09:28.222 runtime=6 00:09:28.222 ioengine=libaio 00:09:28.222 direct=1 00:09:28.222 bs=4096 00:09:28.222 iodepth=128 00:09:28.222 norandommap=0 00:09:28.222 numjobs=1 00:09:28.222 00:09:28.222 verify_dump=1 00:09:28.222 verify_backlog=512 00:09:28.222 verify_state_save=0 00:09:28.222 do_verify=1 00:09:28.222 verify=crc32c-intel 00:09:28.222 [job0] 00:09:28.222 filename=/dev/nvme0n1 00:09:28.222 Could not set queue depth (nvme0n1) 00:09:28.481 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.481 fio-3.35 00:09:28.481 Starting 1 thread 00:09:29.435 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:29.435 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:29.721 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:30.658 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:30.658 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:30.658 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:30.658 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:30.916 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:31.175 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:31.175 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:31.175 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:31.176 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:31.176 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:31.176 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:31.176 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:31.176 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:31.176 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:31.176 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:31.176 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:31.176 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:31.176 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:32.112 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:32.112 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:32.112 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:32.112 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68960 00:09:34.645 00:09:34.645 job0: (groupid=0, jobs=1): err= 0: pid=68981: Wed Nov 20 15:17:20 2024 00:09:34.645 read: IOPS=14.9k, BW=58.1MiB/s (60.9MB/s)(349MiB/6005msec) 00:09:34.645 slat (usec): min=3, max=5853, avg=32.56, stdev=134.19 00:09:34.645 clat (usec): min=239, max=17900, avg=5956.65, stdev=1406.36 00:09:34.645 lat (usec): min=252, max=18028, avg=5989.21, stdev=1413.68 00:09:34.645 clat percentiles (usec): 00:09:34.645 | 1.00th=[ 2114], 5.00th=[ 3458], 10.00th=[ 4228], 20.00th=[ 5145], 00:09:34.645 | 30.00th=[ 5538], 40.00th=[ 5800], 50.00th=[ 6063], 60.00th=[ 6259], 00:09:34.645 | 70.00th=[ 6521], 80.00th=[ 6783], 90.00th=[ 7242], 95.00th=[ 8029], 00:09:34.645 | 99.00th=[ 9765], 99.50th=[10421], 99.90th=[16057], 99.95th=[16450], 00:09:34.645 | 99.99th=[17433] 00:09:34.645 bw ( KiB/s): min=12424, max=42728, per=51.26%, avg=30478.27, stdev=9261.46, samples=11 00:09:34.645 iops : min= 3106, max=10682, avg=7619.55, stdev=2315.36, samples=11 00:09:34.645 write: IOPS=8940, BW=34.9MiB/s (36.6MB/s)(180MiB/5148msec); 0 zone resets 00:09:34.645 slat (usec): min=11, max=1558, avg=45.37, stdev=84.14 00:09:34.645 clat (usec): min=235, max=16878, avg=4986.78, stdev=1258.16 00:09:34.645 lat (usec): min=257, max=16928, avg=5032.15, stdev=1265.69 00:09:34.645 clat percentiles (usec): 00:09:34.645 | 1.00th=[ 1762], 5.00th=[ 2638], 10.00th=[ 3195], 20.00th=[ 4080], 00:09:34.645 | 30.00th=[ 4621], 40.00th=[ 4948], 50.00th=[ 5211], 60.00th=[ 5407], 00:09:34.645 | 70.00th=[ 5604], 80.00th=[ 5800], 90.00th=[ 6128], 95.00th=[ 6521], 00:09:34.645 | 99.00th=[ 8586], 99.50th=[ 9241], 99.90th=[10814], 99.95th=[11469], 00:09:34.645 | 99.99th=[12387] 00:09:34.645 bw ( KiB/s): min=12984, max=42080, per=85.45%, avg=30559.09, stdev=9106.37, samples=11 00:09:34.645 iops : min= 3246, max=10520, avg=7639.73, stdev=2276.58, samples=11 00:09:34.645 lat (usec) : 250=0.01%, 500=0.07%, 750=0.11%, 1000=0.11% 00:09:34.645 lat (msec) : 2=0.82%, 4=10.79%, 10=87.52%, 20=0.56% 00:09:34.645 cpu : usr=7.33%, sys=33.33%, ctx=10584, majf=0, minf=127 00:09:34.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:34.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.645 issued rwts: total=89260,46026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.645 00:09:34.645 Run status group 0 (all jobs): 00:09:34.645 READ: bw=58.1MiB/s (60.9MB/s), 58.1MiB/s-58.1MiB/s (60.9MB/s-60.9MB/s), io=349MiB (366MB), run=6005-6005msec 00:09:34.645 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=180MiB (189MB), run=5148-5148msec 00:09:34.645 00:09:34.645 Disk stats (read/write): 00:09:34.645 nvme0n1: ios=88217/45308, merge=0/0, ticks=468649/197923, in_queue=666572, util=98.58% 00:09:34.645 15:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:34.645 15:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:09:34.645 15:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:34.645 15:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:34.645 15:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.645 15:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:34.645 15:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.645 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:34.645 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.903 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:34.903 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:34.903 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:34.903 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:34.903 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.904 rmmod nvme_tcp 00:09:34.904 rmmod nvme_fabrics 00:09:34.904 rmmod nvme_keyring 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 68667 ']' 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 68667 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 68667 ']' 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 68667 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.904 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68667 00:09:35.162 killing process with pid 68667 00:09:35.162 15:17:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.162 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.162 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68667' 00:09:35.162 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 68667 00:09:35.162 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 68667 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:35.421 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:35.422 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:35.422 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:35.422 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:35.422 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:35.422 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.681 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.681 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:35.681 15:17:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.681 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.681 15:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.681 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:35.681 00:09:35.681 real 0m21.057s 00:09:35.681 user 1m18.972s 00:09:35.681 sys 0m9.008s 00:09:35.681 ************************************ 00:09:35.681 END TEST nvmf_target_multipath 00:09:35.681 ************************************ 00:09:35.681 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.681 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:35.681 15:17:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:35.681 15:17:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.681 15:17:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.681 15:17:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.681 ************************************ 00:09:35.681 START TEST nvmf_zcopy 00:09:35.681 ************************************ 00:09:35.681 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:35.939 * Looking for test storage... 
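
[Editor's note] The multipath run above finishes with nvmftestfini dismantling the virtual test network; the mirror-image nvmf_veth_init setup reappears further down in this log when the zcopy test initializes. Condensed from the xtrace, and using the interface and namespace names exactly as logged, the teardown amounts to the following bash sketch:

    # Detach every veth bridge port from nvmf_br, then bring the ports down.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster
    done
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge    # remove the bridge itself
    ip link delete nvmf_init_if           # deleting one end of a veth pair
    ip link delete nvmf_init_if2          # implicitly removes its peer
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    # The namespace itself goes away inside _remove_spdk_ns, whose xtrace
    # is suppressed above via: eval '_remove_spdk_ns 15> /dev/null'
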
00:09:35.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:35.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.939 --rc genhtml_branch_coverage=1 00:09:35.939 --rc genhtml_function_coverage=1 00:09:35.939 --rc genhtml_legend=1 00:09:35.939 --rc geninfo_all_blocks=1 00:09:35.939 --rc geninfo_unexecuted_blocks=1 00:09:35.939 00:09:35.939 ' 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:35.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.939 --rc genhtml_branch_coverage=1 00:09:35.939 --rc genhtml_function_coverage=1 00:09:35.939 --rc genhtml_legend=1 00:09:35.939 --rc geninfo_all_blocks=1 00:09:35.939 --rc geninfo_unexecuted_blocks=1 00:09:35.939 00:09:35.939 ' 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:35.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.939 --rc genhtml_branch_coverage=1 00:09:35.939 --rc genhtml_function_coverage=1 00:09:35.939 --rc genhtml_legend=1 00:09:35.939 --rc geninfo_all_blocks=1 00:09:35.939 --rc geninfo_unexecuted_blocks=1 00:09:35.939 00:09:35.939 ' 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:35.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.939 --rc genhtml_branch_coverage=1 00:09:35.939 --rc genhtml_function_coverage=1 00:09:35.939 --rc genhtml_legend=1 00:09:35.939 --rc geninfo_all_blocks=1 00:09:35.939 --rc geninfo_unexecuted_blocks=1 00:09:35.939 00:09:35.939 ' 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
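
[Editor's note] The xtrace above is scripts/common.sh deciding that the installed lcov (1.15) predates version 2 before choosing coverage flags. A condensed bash sketch of that lt/cmp_versions logic, with one stated liberty: the real helper routes each component through a decimal sanitizer, which the ${verN[v]:-0} defaults here only approximate:

    # cmp_versions 1.15 '<' 2  ->  true, so the lcov 1.x option set is used.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" splits into (1 15)
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 lt=0 gt=0 v
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        # Walk the components of the longer version, padding the shorter with 0.
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && gt=1 && break
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && lt=1 && break
        done
        case "$op" in
            '<')  (( lt == 1 )) ;;
            '>')  (( gt == 1 )) ;;
            '<=') (( gt == 0 )) ;;
            '>=') (( lt == 0 )) ;;
        esac
    }
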
00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:35.939 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.940 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
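
[Editor's note] One genuine wart is captured just above: test/nvmf/common.sh line 33 feeds an empty expansion to a numeric test, and bash complains "[: : integer expression expected". It is harmless here because the failing test simply evaluates as false, but the usual fix is a default-expansion guard. A minimal reproduction (the variable name is hypothetical; the log only shows its empty value being tested):

    # Reproduces the error seen in the log:
    flag=""
    [ "$flag" -eq 1 ] && echo enabled   # bash: [: : integer expression expected

    # Guarded form: treat empty/unset as 0 before the integer comparison.
    [ "${flag:-0}" -eq 1 ] && echo enabled
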
00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:35.940 Cannot find device "nvmf_init_br" 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:35.940 15:17:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:35.940 Cannot find device "nvmf_init_br2" 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:35.940 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:36.199 Cannot find device "nvmf_tgt_br" 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.199 Cannot find device "nvmf_tgt_br2" 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:36.199 Cannot find device "nvmf_init_br" 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:36.199 Cannot find device "nvmf_init_br2" 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:36.199 Cannot find device "nvmf_tgt_br" 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:36.199 Cannot find device "nvmf_tgt_br2" 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:36.199 Cannot find device "nvmf_br" 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:36.199 Cannot find device "nvmf_init_if" 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:36.199 Cannot find device "nvmf_init_if2" 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:36.199 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:36.458 15:17:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:36.458 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:36.458 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:09:36.458 00:09:36.458 --- 10.0.0.3 ping statistics --- 00:09:36.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.458 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:36.458 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:36.458 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:09:36.458 00:09:36.458 --- 10.0.0.4 ping statistics --- 00:09:36.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.458 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:36.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:36.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:36.458 00:09:36.458 --- 10.0.0.1 ping statistics --- 00:09:36.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.458 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:36.458 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:36.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:36.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:36.458 00:09:36.458 --- 10.0.0.2 ping statistics --- 00:09:36.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.458 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=69321 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 69321 00:09:36.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 69321 ']' 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.459 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.717 [2024-11-20 15:17:22.961513] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:09:36.717 [2024-11-20 15:17:22.961603] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.717 [2024-11-20 15:17:23.102491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.717 [2024-11-20 15:17:23.183054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.717 [2024-11-20 15:17:23.183113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.717 [2024-11-20 15:17:23.183124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.717 [2024-11-20 15:17:23.183132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.717 [2024-11-20 15:17:23.183139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.717 [2024-11-20 15:17:23.183531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.655 [2024-11-20 15:17:23.925903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.655 [2024-11-20 15:17:23.949984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.655 malloc0 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.655 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.655 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:37.655 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:37.655 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:37.655 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:37.655 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:37.655 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:37.655 { 00:09:37.655 "params": { 00:09:37.655 "name": "Nvme$subsystem", 00:09:37.655 "trtype": "$TEST_TRANSPORT", 00:09:37.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.655 "adrfam": "ipv4", 00:09:37.655 "trsvcid": "$NVMF_PORT", 00:09:37.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.655 "hdgst": ${hdgst:-false}, 00:09:37.655 "ddgst": ${ddgst:-false} 00:09:37.655 }, 00:09:37.655 "method": "bdev_nvme_attach_controller" 00:09:37.655 } 00:09:37.655 EOF 00:09:37.655 )") 00:09:37.655 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:37.655 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
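
[Editor's note] The bdevperf invocation above reads its config from /dev/fd/62, a process substitution filled by gen_nvmf_target_json; the pretty-printed result is the JSON object printed immediately below. A bash sketch of the generator as this xtrace reveals it, with stated liberties: the real helper builds each entry from a <<-EOF heredoc (printf is used here to keep the sketch indentation-safe), and anything it may wrap around the joined entries is not visible in this excerpt:

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do   # default: a single subsystem, "1"
            # One bdev_nvme_attach_controller entry per subsystem argument.
            config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": %s, "ddgst": %s}, "method": "bdev_nvme_attach_controller"}' \
                "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
                "$subsystem" "$subsystem" "${hdgst:-false}" "${ddgst:-false}")")
        done
        local IFS=,
        # Join the entries and pretty-print; with the single default subsystem
        # this is one valid JSON object, exactly as printed in the log below.
        printf '%s\n' "${config[*]}" | jq .
    }
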
00:09:37.655 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:37.655 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:37.655 "params": { 00:09:37.655 "name": "Nvme1", 00:09:37.655 "trtype": "tcp", 00:09:37.655 "traddr": "10.0.0.3", 00:09:37.655 "adrfam": "ipv4", 00:09:37.655 "trsvcid": "4420", 00:09:37.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.655 "hdgst": false, 00:09:37.655 "ddgst": false 00:09:37.655 }, 00:09:37.655 "method": "bdev_nvme_attach_controller" 00:09:37.655 }' 00:09:37.655 [2024-11-20 15:17:24.057488] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:09:37.655 [2024-11-20 15:17:24.057707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69372 ] 00:09:37.915 [2024-11-20 15:17:24.210077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.915 [2024-11-20 15:17:24.287780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.174 Running I/O for 10 seconds... 00:09:40.046 8114.00 IOPS, 63.39 MiB/s [2024-11-20T15:17:27.904Z] 8173.00 IOPS, 63.85 MiB/s [2024-11-20T15:17:28.842Z] 8171.00 IOPS, 63.84 MiB/s [2024-11-20T15:17:29.799Z] 8173.75 IOPS, 63.86 MiB/s [2024-11-20T15:17:30.735Z] 8161.60 IOPS, 63.76 MiB/s [2024-11-20T15:17:31.672Z] 8144.17 IOPS, 63.63 MiB/s [2024-11-20T15:17:32.607Z] 8123.43 IOPS, 63.46 MiB/s [2024-11-20T15:17:33.629Z] 8118.62 IOPS, 63.43 MiB/s [2024-11-20T15:17:34.565Z] 8124.11 IOPS, 63.47 MiB/s [2024-11-20T15:17:34.565Z] 8130.10 IOPS, 63.52 MiB/s 00:09:48.083 Latency(us) 00:09:48.083 [2024-11-20T15:17:34.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.083 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:48.083 Verification LBA range: start 0x0 length 0x1000 00:09:48.083 Nvme1n1 : 10.01 8133.77 63.55 0.00 0.00 15692.53 2368.77 24108.83 00:09:48.083 [2024-11-20T15:17:34.565Z] =================================================================================================================== 00:09:48.083 [2024-11-20T15:17:34.565Z] Total : 8133.77 63.55 0.00 0.00 15692.53 2368.77 24108.83 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69495 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:48.341 { 00:09:48.341 "params": { 00:09:48.341 "name": "Nvme$subsystem", 
00:09:48.341 "trtype": "$TEST_TRANSPORT", 00:09:48.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:48.341 "adrfam": "ipv4", 00:09:48.341 "trsvcid": "$NVMF_PORT", 00:09:48.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:48.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:48.341 "hdgst": ${hdgst:-false}, 00:09:48.341 "ddgst": ${ddgst:-false} 00:09:48.341 }, 00:09:48.341 "method": "bdev_nvme_attach_controller" 00:09:48.341 } 00:09:48.341 EOF 00:09:48.341 )") 00:09:48.341 [2024-11-20 15:17:34.784224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.341 [2024-11-20 15:17:34.784400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:48.341 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:48.341 [2024-11-20 15:17:34.796185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.341 [2024-11-20 15:17:34.796209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.341 15:17:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:48.341 "params": { 00:09:48.341 "name": "Nvme1", 00:09:48.341 "trtype": "tcp", 00:09:48.341 "traddr": "10.0.0.3", 00:09:48.341 "adrfam": "ipv4", 00:09:48.341 "trsvcid": "4420", 00:09:48.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:48.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:48.341 "hdgst": false, 00:09:48.341 "ddgst": false 00:09:48.341 }, 00:09:48.341 "method": "bdev_nvme_attach_controller" 00:09:48.341 }' 00:09:48.341 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.341 [2024-11-20 15:17:34.808141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.341 [2024-11-20 15:17:34.808163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.341 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.342 [2024-11-20 15:17:34.820122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.342 [2024-11-20 15:17:34.820150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.601 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.601 [2024-11-20 15:17:34.834431] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 
initialization... 00:09:48.601 [2024-11-20 15:17:34.834499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69495 ] 00:09:48.601 [2024-11-20 15:17:34.836090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.601 [2024-11-20 15:17:34.836110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.601 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.601 [2024-11-20 15:17:34.852069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.601 [2024-11-20 15:17:34.852090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.601 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.601 [2024-11-20 15:17:34.868072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.601 [2024-11-20 15:17:34.868098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.601 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.601 [2024-11-20 15:17:34.884024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.601 [2024-11-20 15:17:34.884045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.601 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.601 [2024-11-20 15:17:34.900001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.601 [2024-11-20 15:17:34.900024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.601 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.601 [2024-11-20 15:17:34.915977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.601 [2024-11-20 15:17:34.916000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.601 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:09:48.602 [2024-11-20 15:17:34.931954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.602 [2024-11-20 15:17:34.931975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.602 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.602 [2024-11-20 15:17:34.947936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.602 [2024-11-20 15:17:34.947956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.602 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.602 [2024-11-20 15:17:34.967909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.602 [2024-11-20 15:17:34.967931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.602 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.602 [2024-11-20 15:17:34.983889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.602 [2024-11-20 15:17:34.983911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.602 2024/11/20 15:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.602 [2024-11-20 15:17:34.988579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.602 [2024-11-20 15:17:34.999869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.602 [2024-11-20 15:17:34.999903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.602 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.602 [2024-11-20 15:17:35.015846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.602 [2024-11-20 15:17:35.015870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.602 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.602 [2024-11-20 15:17:35.031815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.602 [2024-11-20 
15:17:35.031835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.602 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.602 [2024-11-20 15:17:35.047796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.602 [2024-11-20 15:17:35.047816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.602 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.602 [2024-11-20 15:17:35.063776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.602 [2024-11-20 15:17:35.063796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.602 [2024-11-20 15:17:35.067875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.602 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.602 [2024-11-20 15:17:35.079761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.602 [2024-11-20 15:17:35.079787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.861 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.861 [2024-11-20 15:17:35.091757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.861 [2024-11-20 15:17:35.091786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.861 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.861 [2024-11-20 15:17:35.107746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.861 [2024-11-20 15:17:35.107772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.861 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.861 [2024-11-20 15:17:35.123741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.861 [2024-11-20 15:17:35.123766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.861 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.861 [2024-11-20 15:17:35.139749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.861 [2024-11-20 15:17:35.139770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.862 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.862 [2024-11-20 15:17:35.155742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.862 [2024-11-20 15:17:35.155766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.862 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.862 [2024-11-20 15:17:35.171743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.862 [2024-11-20 15:17:35.171768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.862 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.862 [2024-11-20 15:17:35.187741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.862 [2024-11-20 15:17:35.187765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.862 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.862 [2024-11-20 15:17:35.203737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.862 [2024-11-20 15:17:35.203758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.862 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.862 [2024-11-20 15:17:35.219769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.862 [2024-11-20 15:17:35.219799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.862 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.862 [2024-11-20 15:17:35.235749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:48.862 [2024-11-20 15:17:35.235776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.862 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.862 [2024-11-20 15:17:35.251749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.862 [2024-11-20 15:17:35.251776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.862 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.862 [2024-11-20 15:17:35.267748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.862 [2024-11-20 15:17:35.267776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.862 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.862 [2024-11-20 15:17:35.283763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.862 [2024-11-20 15:17:35.283793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.862 Running I/O for 5 seconds... 
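Every failing call above is the same JSON-RPC request re-issued by the Go client: attach bdev malloc0 as NSID 1 on nqn.2016-06.io.spdk:cnode1, an NSID that is already attached, so each attempt dies in the nvmf_rpc_ns_paused callback. A minimal Go sketch of the wire format, reconstructed from the params dump in the log; the struct layout is illustrative, not SPDK's Go client source:

package main

import (
	"encoding/json"
	"fmt"
)

// addNsParams mirrors the params map printed in the log above; field names
// come from the log, the Go types are an assumption.
type addNsParams struct {
	Nqn       string `json:"nqn"`
	Namespace struct {
		BdevName      string `json:"bdev_name"`
		Nsid          int    `json:"nsid"`
		NoAutoVisible bool   `json:"no_auto_visible"`
	} `json:"namespace"`
}

func main() {
	var p addNsParams
	p.Nqn = "nqn.2016-06.io.spdk:cnode1"
	p.Namespace.BdevName = "malloc0"
	p.Namespace.Nsid = 1 // NSID 1 is already attached, so SPDK rejects the call

	req := map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "nvmf_subsystem_add_ns",
		"params":  p,
	}
	out, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(out))
	// SPDK answers with a JSON-RPC error object carrying code -32602
	// ("Invalid parameters"), which the client renders as Code=-32602 above.
}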
00:09:48.862 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.862 [2024-11-20 15:17:35.303346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.862 [2024-11-20 15:17:35.303383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.862 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.862 [2024-11-20 15:17:35.321191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.862 [2024-11-20 15:17:35.321226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.862 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.862 [2024-11-20 15:17:35.339946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.862 [2024-11-20 15:17:35.339983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.121 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.121 [2024-11-20 15:17:35.357326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.121 [2024-11-20 15:17:35.357363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.121 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.121 [2024-11-20 15:17:35.375135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.121 [2024-11-20 15:17:35.375172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.121 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.121 [2024-11-20 15:17:35.393409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.121 [2024-11-20 15:17:35.393446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.121 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
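The %!s(bool=false) token inside the printed params is not SPDK output; it is Go's fmt package flagging a bool value that the client evidently formats with the %s verb when it logs the request map. A one-line reproduction:

package main

import "fmt"

func main() {
	// A bool formatted with %s produces the %!s(...) marker seen in the
	// no_auto_visible field of every params dump in this log.
	fmt.Printf("map[no_auto_visible:%s nsid:1]\n", false)
	// Prints: map[no_auto_visible:%!s(bool=false) nsid:1]
}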
00:09:49.122 [2024-11-20 15:17:35.411075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.122 [2024-11-20 15:17:35.411109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.122 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.122 [2024-11-20 15:17:35.429098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.122 [2024-11-20 15:17:35.429130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.122 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.122 [2024-11-20 15:17:35.446730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.122 [2024-11-20 15:17:35.446763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.122 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.122 [2024-11-20 15:17:35.464354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.122 [2024-11-20 15:17:35.464388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.122 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.122 [2024-11-20 15:17:35.482256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.122 [2024-11-20 15:17:35.482289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.122 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.122 [2024-11-20 15:17:35.500082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.122 [2024-11-20 15:17:35.500115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.122 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.122 [2024-11-20 15:17:35.517836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.122 [2024-11-20 15:17:35.517870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.122 2024/11/20 15:17:35 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.122 [2024-11-20 15:17:35.535821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.122 [2024-11-20 15:17:35.535862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.122 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.122 [2024-11-20 15:17:35.557653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.122 [2024-11-20 15:17:35.557693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.122 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.122 [2024-11-20 15:17:35.575869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.122 [2024-11-20 15:17:35.575909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.122 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.122 [2024-11-20 15:17:35.594348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.122 [2024-11-20 15:17:35.594397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.122 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.381 [2024-11-20 15:17:35.612233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.381 [2024-11-20 15:17:35.612269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.381 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.381 [2024-11-20 15:17:35.630104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.381 [2024-11-20 15:17:35.630143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.381 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.381 [2024-11-20 15:17:35.648531] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.381 [2024-11-20 15:17:35.648567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.381 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.381 [2024-11-20 15:17:35.666523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.381 [2024-11-20 15:17:35.666558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.381 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.381 [2024-11-20 15:17:35.685014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.381 [2024-11-20 15:17:35.685047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.381 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.382 [2024-11-20 15:17:35.703346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.382 [2024-11-20 15:17:35.703385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.382 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.382 [2024-11-20 15:17:35.721068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.382 [2024-11-20 15:17:35.721105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.382 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.382 [2024-11-20 15:17:35.738986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.382 [2024-11-20 15:17:35.739023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.382 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.382 [2024-11-20 15:17:35.757159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.382 [2024-11-20 15:17:35.757194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.382 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.382 [2024-11-20 15:17:35.775641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.382 [2024-11-20 15:17:35.775685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.382 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.382 [2024-11-20 15:17:35.793794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.382 [2024-11-20 15:17:35.793828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.382 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.382 [2024-11-20 15:17:35.812356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.382 [2024-11-20 15:17:35.812394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.382 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.382 [2024-11-20 15:17:35.829584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.382 [2024-11-20 15:17:35.829618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.382 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.382 [2024-11-20 15:17:35.847234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.382 [2024-11-20 15:17:35.847268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.382 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.641 [2024-11-20 15:17:35.865070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.641 [2024-11-20 15:17:35.865104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.641 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.641 [2024-11-20 15:17:35.882771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:49.641 [2024-11-20 15:17:35.882806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.641 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.641 [2024-11-20 15:17:35.900829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.641 [2024-11-20 15:17:35.900862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.641 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.641 [2024-11-20 15:17:35.918420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.641 [2024-11-20 15:17:35.918453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.641 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.641 [2024-11-20 15:17:35.936026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.641 [2024-11-20 15:17:35.936060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.642 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.642 [2024-11-20 15:17:35.954614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.642 [2024-11-20 15:17:35.954650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.642 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.642 [2024-11-20 15:17:35.972444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.642 [2024-11-20 15:17:35.972478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.642 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.642 [2024-11-20 15:17:35.990484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.642 [2024-11-20 15:17:35.990517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.642 2024/11/20 15:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.642 [2024-11-20 15:17:36.005577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.642 [2024-11-20 15:17:36.005612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.642 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.642 [2024-11-20 15:17:36.024507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.642 [2024-11-20 15:17:36.024542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.642 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.642 [2024-11-20 15:17:36.042117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.642 [2024-11-20 15:17:36.042151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.642 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.642 [2024-11-20 15:17:36.060585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.642 [2024-11-20 15:17:36.060620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.642 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.642 [2024-11-20 15:17:36.078934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.642 [2024-11-20 15:17:36.078969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.642 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.642 [2024-11-20 15:17:36.097133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.642 [2024-11-20 15:17:36.097167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.642 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.642 [2024-11-20 15:17:36.115085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
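Code=-32602 is the JSON-RPC 2.0 "Invalid params" error code, which SPDK returns here because the duplicate NSID makes the request semantically invalid; a test that deliberately re-adds NSID 1 would treat this rejection as the expected path. A hedged sketch of checking for it on the client side; the JSON field names follow the JSON-RPC 2.0 spec, since the log only shows the client-side Code=/Msg= rendering:

package main

import (
	"encoding/json"
	"fmt"
)

// rpcError follows the JSON-RPC 2.0 error object shape; it is not taken
// from SPDK's Go client source.
type rpcError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

func main() {
	// Payload reproducing the Code/Msg pair logged above.
	raw := []byte(`{"code": -32602, "message": "Invalid parameters"}`)
	var e rpcError
	if err := json.Unmarshal(raw, &e); err != nil {
		panic(err)
	}
	if e.Code == -32602 { // JSON-RPC 2.0 "Invalid params"
		fmt.Printf("expected rejection: Code=%d Msg=%s\n", e.Code, e.Message)
	}
}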
00:09:49.642 [2024-11-20 15:17:36.115120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.642 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.132722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.132753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.150742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.150775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.165497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.165529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.183120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.183156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.201229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.201264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.219007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.219043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.236601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.236639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.254482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.254518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.272596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.272633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 15949.00 IOPS, 124.60 MiB/s [2024-11-20T15:17:36.385Z] [2024-11-20 15:17:36.290579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.290613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.308350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.308384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.326206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.326240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.343657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:09:49.903 [2024-11-20 15:17:36.343702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.361622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.361656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:49.903 [2024-11-20 15:17:36.379638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.903 [2024-11-20 15:17:36.379681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.903 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.163 [2024-11-20 15:17:36.397486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.163 [2024-11-20 15:17:36.397521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.163 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.163 [2024-11-20 15:17:36.416123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.163 [2024-11-20 15:17:36.416158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.163 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.163 [2024-11-20 15:17:36.434444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.163 [2024-11-20 15:17:36.434483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.164 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.164 [2024-11-20 15:17:36.452762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.164 [2024-11-20 15:17:36.452799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.164 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.164 [2024-11-20 15:17:36.470848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.164 [2024-11-20 15:17:36.470885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.164 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.164 [2024-11-20 15:17:36.488652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.164 [2024-11-20 15:17:36.488696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.164 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.164 [2024-11-20 15:17:36.506836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.164 [2024-11-20 15:17:36.506873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.164 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.164 [2024-11-20 15:17:36.524652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.164 [2024-11-20 15:17:36.524697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.164 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.164 [2024-11-20 15:17:36.542836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.164 [2024-11-20 15:17:36.542871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.164 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.164 [2024-11-20 15:17:36.560821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.164 [2024-11-20 15:17:36.560853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.164 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.164 [2024-11-20 15:17:36.578579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.164 [2024-11-20 15:17:36.578616] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.164 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.164 [2024-11-20 15:17:36.596609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.164 [2024-11-20 15:17:36.596645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.164 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.164 [2024-11-20 15:17:36.614570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.164 [2024-11-20 15:17:36.614606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.164 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.164 [2024-11-20 15:17:36.632797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.164 [2024-11-20 15:17:36.632850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.164 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.424 [2024-11-20 15:17:36.650276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.424 [2024-11-20 15:17:36.650309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.424 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.424 [2024-11-20 15:17:36.668279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.424 [2024-11-20 15:17:36.668313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.424 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:50.424 [2024-11-20 15:17:36.686657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.424 [2024-11-20 15:17:36.686700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.424 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:50.424 [2024-11-20 15:17:36.704748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:50.424 [2024-11-20 15:17:36.704783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:50.424 2024/11/20 15:17:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-record error sequence repeats roughly every 18 ms, 00:09:50.424-00:09:50.945 (15:17:36.722 through 15:17:37.280) ...]
00:09:50.945 15999.50 IOPS, 125.00 MiB/s [2024-11-20T15:17:37.427Z]
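(For reference, not part of the test output: the failing call above is an ordinary JSON-RPC 2.0 request against the SPDK application's RPC socket, and Code=-32602 is the standard JSON-RPC "Invalid params" error code. A minimal Python sketch of the same request, assuming a Linux host, SPDK's default RPC socket path /var/tmp/spdk.sock, and the parameters shown in the log records:

#!/usr/bin/env python3
# Sketch: replay the nvmf_subsystem_add_ns call seen in the log above.
# Assumes an SPDK target listening on its default RPC Unix socket,
# /var/tmp/spdk.sock; the NQN, bdev name, and NSID are taken from the log.
import json
import socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")
    sock.sendall(json.dumps(request).encode())
    # When NSID 1 is already claimed within cnode1, the target answers with
    # the error repeated throughout this log: Code=-32602 Msg=Invalid parameters.
    print(sock.recv(65536).decode())

Issuing the request while NSID 1 is already attached reproduces the "Requested NSID 1 already in use" error deterministically, which appears to be what this phase of the test is exercising in a tight loop.)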
[... the sequence continues, 00:09:50.946-00:09:51.984 (15:17:37.298 through 15:17:38.279) ...]
00:09:51.984 15991.33 IOPS, 124.93 MiB/s [2024-11-20T15:17:38.466Z]
[... the sequence continues, 00:09:51.984-00:09:52.766 (15:17:38.297 through 15:17:39.112) ...]
00:09:52.766 [2024-11-20 15:17:39.128700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
Requested NSID 1 already in use 00:09:52.766 [2024-11-20 15:17:39.128735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.766 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:52.766 [2024-11-20 15:17:39.146376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.766 [2024-11-20 15:17:39.146424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.766 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:52.766 [2024-11-20 15:17:39.164399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.766 [2024-11-20 15:17:39.164439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.766 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:52.766 [2024-11-20 15:17:39.182360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.766 [2024-11-20 15:17:39.182395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.766 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:52.766 [2024-11-20 15:17:39.200037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.766 [2024-11-20 15:17:39.200071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.766 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:52.766 [2024-11-20 15:17:39.214864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.766 [2024-11-20 15:17:39.214898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.766 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:52.766 [2024-11-20 15:17:39.232453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.766 [2024-11-20 15:17:39.232489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.766 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.025 [2024-11-20 15:17:39.247337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.025 [2024-11-20 15:17:39.247388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.025 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.025 [2024-11-20 15:17:39.263141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.025 [2024-11-20 15:17:39.263180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.025 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.025 [2024-11-20 15:17:39.281340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.025 [2024-11-20 15:17:39.281377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.025 15991.75 IOPS, 124.94 MiB/s [2024-11-20T15:17:39.507Z] 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.025 [2024-11-20 15:17:39.299147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.025 [2024-11-20 15:17:39.299185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.025 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.025 [2024-11-20 15:17:39.316952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.025 [2024-11-20 15:17:39.316986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.025 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.025 [2024-11-20 15:17:39.334683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.025 [2024-11-20 15:17:39.334721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.025 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.025 [2024-11-20 15:17:39.352744] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.025 [2024-11-20 15:17:39.352780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.025 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.025 [2024-11-20 15:17:39.367805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.025 [2024-11-20 15:17:39.367839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.025 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.025 [2024-11-20 15:17:39.386854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.025 [2024-11-20 15:17:39.386889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.025 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.025 [2024-11-20 15:17:39.405311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.025 [2024-11-20 15:17:39.405346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.025 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.025 [2024-11-20 15:17:39.423547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.025 [2024-11-20 15:17:39.423582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.025 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.025 [2024-11-20 15:17:39.441104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.025 [2024-11-20 15:17:39.441137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.026 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.026 [2024-11-20 15:17:39.459143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.026 [2024-11-20 15:17:39.459179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.026 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.026 [2024-11-20 15:17:39.477022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.026 [2024-11-20 15:17:39.477057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.026 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.026 [2024-11-20 15:17:39.495502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.026 [2024-11-20 15:17:39.495540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.026 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.284 [2024-11-20 15:17:39.510414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.284 [2024-11-20 15:17:39.510449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.284 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.284 [2024-11-20 15:17:39.527973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.284 [2024-11-20 15:17:39.528009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.284 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.284 [2024-11-20 15:17:39.545437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.284 [2024-11-20 15:17:39.545473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.284 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.284 [2024-11-20 15:17:39.560556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.284 [2024-11-20 15:17:39.560592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.284 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.284 [2024-11-20 15:17:39.580263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:53.284 [2024-11-20 15:17:39.580297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.284 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.284 [2024-11-20 15:17:39.598536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.284 [2024-11-20 15:17:39.598573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.284 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.284 [2024-11-20 15:17:39.616558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.284 [2024-11-20 15:17:39.616597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.284 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.284 [2024-11-20 15:17:39.632243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.284 [2024-11-20 15:17:39.632288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.284 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.285 [2024-11-20 15:17:39.652260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.285 [2024-11-20 15:17:39.652308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.285 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.285 [2024-11-20 15:17:39.663367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.285 [2024-11-20 15:17:39.663410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.285 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.285 [2024-11-20 15:17:39.681734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.285 [2024-11-20 15:17:39.681775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.285 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.285 [2024-11-20 15:17:39.697371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.285 [2024-11-20 15:17:39.697417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.285 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.285 [2024-11-20 15:17:39.717103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.285 [2024-11-20 15:17:39.717146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.285 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.285 [2024-11-20 15:17:39.732075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.285 [2024-11-20 15:17:39.732111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.285 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.285 [2024-11-20 15:17:39.747608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.285 [2024-11-20 15:17:39.747642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.285 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.285 [2024-11-20 15:17:39.762610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.285 [2024-11-20 15:17:39.762644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.285 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.543 [2024-11-20 15:17:39.778438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.543 [2024-11-20 15:17:39.778472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.543 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.543 [2024-11-20 15:17:39.793006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:53.543 [2024-11-20 15:17:39.793038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.543 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.543 [2024-11-20 15:17:39.808550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.543 [2024-11-20 15:17:39.808585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.543 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.543 [2024-11-20 15:17:39.823455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.543 [2024-11-20 15:17:39.823490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.543 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.543 [2024-11-20 15:17:39.843102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.543 [2024-11-20 15:17:39.843136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.543 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.543 [2024-11-20 15:17:39.858151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.543 [2024-11-20 15:17:39.858186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.543 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.543 [2024-11-20 15:17:39.877541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.543 [2024-11-20 15:17:39.877589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.543 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.543 [2024-11-20 15:17:39.892450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.543 [2024-11-20 15:17:39.892484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.543 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.543 [2024-11-20 15:17:39.907077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.543 [2024-11-20 15:17:39.907109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.544 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.544 [2024-11-20 15:17:39.923133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.544 [2024-11-20 15:17:39.923169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.544 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.544 [2024-11-20 15:17:39.933908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.544 [2024-11-20 15:17:39.933941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.544 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.544 [2024-11-20 15:17:39.949153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.544 [2024-11-20 15:17:39.949186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.544 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.544 [2024-11-20 15:17:39.963628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.544 [2024-11-20 15:17:39.963665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.544 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.544 [2024-11-20 15:17:39.981294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.544 [2024-11-20 15:17:39.981347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.544 2024/11/20 15:17:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.544 [2024-11-20 15:17:39.999555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.544 [2024-11-20 15:17:39.999595] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.544 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.544 [2024-11-20 15:17:40.014742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.544 [2024-11-20 15:17:40.014777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.544 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.802 [2024-11-20 15:17:40.030516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.802 [2024-11-20 15:17:40.030551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.802 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.802 [2024-11-20 15:17:40.045599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.802 [2024-11-20 15:17:40.045650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.802 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.802 [2024-11-20 15:17:40.065120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.802 [2024-11-20 15:17:40.065167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.802 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.802 [2024-11-20 15:17:40.080505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.802 [2024-11-20 15:17:40.080543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.802 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.802 [2024-11-20 15:17:40.099922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.802 [2024-11-20 15:17:40.099958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.802 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.802 [2024-11-20 15:17:40.114669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.802 [2024-11-20 15:17:40.114711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.802 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.802 [2024-11-20 15:17:40.125984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.802 [2024-11-20 15:17:40.126017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.802 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.802 [2024-11-20 15:17:40.141067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.802 [2024-11-20 15:17:40.141099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.802 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.803 [2024-11-20 15:17:40.156950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.803 [2024-11-20 15:17:40.156982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.803 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.803 [2024-11-20 15:17:40.168185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.803 [2024-11-20 15:17:40.168216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.803 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.803 [2024-11-20 15:17:40.186350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.803 [2024-11-20 15:17:40.186409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.803 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:53.803 [2024-11-20 15:17:40.204495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.803 [2024-11-20 15:17:40.204538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
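For reference, the call this loop keeps retrying is equivalent to the rpc.py invocation sketched below. This is an illustration, not harness output; the working directory and the default /var/tmp/spdk.sock RPC socket are assumptions, not taken from this log.
  # Attempt to attach bdev malloc0 as namespace 1 of cnode1. NSID 1 is
  # already in use on this subsystem, so the target rejects the call with
  # JSON-RPC error -32602 (Invalid parameters), exactly as logged above.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1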
00:09:53.803 15967.40 IOPS, 124.75 MiB/s [2024-11-20T15:17:40.285Z]
00:09:54.062 Latency(us)
00:09:54.062 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:54.062 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:54.062 Nvme1n1             :       5.01   15968.70     124.76       0.00     0.00    8007.99    3197.84   15265.41
00:09:54.062 ===================================================================================================================
00:09:54.062 Total               :              15968.70     124.76       0.00     0.00    8007.99    3197.84   15265.41
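Quick consistency check on the table above (recomputed for this edit, not part of the harness output): the abort job issues 8192-byte I/Os, so 15968.70 IOPS x 8192 B/IO = 130,815,590 B/s, or 124.76 MiB/s, which matches the reported MiB/s column.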
00:09:54.062 [2024-11-20 15:17:40.289469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.062 [2024-11-20 15:17:40.289501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.062 2024/11/20 15:17:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same error sequence repeats through 15:17:40.557, after which zcopy.sh stops the add-namespace loop ...]
00:09:54.322 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69495) - No
such process 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69495 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.322 delay0 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.322 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:54.322 [2024-11-20 15:17:40.784295] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:00.917 Initializing NVMe Controllers 00:10:00.917 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:00.917 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:00.917 Initialization complete. Launching workers. 
00:10:00.917 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 119 00:10:00.917 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 406, failed to submit 33 00:10:00.917 success 237, unsuccessful 169, failed 0 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:00.917 rmmod nvme_tcp 00:10:00.917 rmmod nvme_fabrics 00:10:00.917 rmmod nvme_keyring 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 69321 ']' 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 69321 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 69321 ']' 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 69321 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.917 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69321 00:10:00.917 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:00.917 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:00.917 killing process with pid 69321 00:10:00.917 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69321' 00:10:00.917 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 69321 00:10:00.917 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 69321 00:10:00.917 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:00.918 15:17:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:00.918 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:01.178 00:10:01.178 real 0m25.494s 00:10:01.178 user 0m40.028s 00:10:01.178 sys 0m8.442s 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.178 ************************************ 00:10:01.178 END TEST nvmf_zcopy 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.178 ************************************ 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.178 15:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.178 ************************************ 00:10:01.178 START TEST nvmf_nmic 00:10:01.178 ************************************ 00:10:01.178 15:17:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:01.438 * Looking for test storage... 00:10:01.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:01.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.438 --rc genhtml_branch_coverage=1 00:10:01.438 --rc genhtml_function_coverage=1 00:10:01.438 --rc genhtml_legend=1 00:10:01.438 --rc geninfo_all_blocks=1 00:10:01.438 --rc geninfo_unexecuted_blocks=1 00:10:01.438 00:10:01.438 ' 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:01.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.438 --rc genhtml_branch_coverage=1 00:10:01.438 --rc genhtml_function_coverage=1 00:10:01.438 --rc genhtml_legend=1 00:10:01.438 --rc geninfo_all_blocks=1 00:10:01.438 --rc geninfo_unexecuted_blocks=1 00:10:01.438 00:10:01.438 ' 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:01.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.438 --rc genhtml_branch_coverage=1 00:10:01.438 --rc genhtml_function_coverage=1 00:10:01.438 --rc genhtml_legend=1 00:10:01.438 --rc geninfo_all_blocks=1 00:10:01.438 --rc geninfo_unexecuted_blocks=1 00:10:01.438 00:10:01.438 ' 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:01.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.438 --rc genhtml_branch_coverage=1 00:10:01.438 --rc genhtml_function_coverage=1 00:10:01.438 --rc genhtml_legend=1 00:10:01.438 --rc geninfo_all_blocks=1 00:10:01.438 --rc geninfo_unexecuted_blocks=1 00:10:01.438 00:10:01.438 ' 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.438 15:17:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:10:01.438 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.439 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:01.439 15:17:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:01.439 Cannot 
find device "nvmf_init_br" 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:01.439 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:01.699 Cannot find device "nvmf_init_br2" 00:10:01.699 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:01.699 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:01.699 Cannot find device "nvmf_tgt_br" 00:10:01.699 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:01.699 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:01.699 Cannot find device "nvmf_tgt_br2" 00:10:01.699 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:01.699 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:01.699 Cannot find device "nvmf_init_br" 00:10:01.699 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:01.699 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:01.699 Cannot find device "nvmf_init_br2" 00:10:01.699 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:01.699 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:01.699 Cannot find device "nvmf_tgt_br" 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:01.699 Cannot find device "nvmf_tgt_br2" 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:01.699 Cannot find device "nvmf_br" 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:01.699 Cannot find device "nvmf_init_if" 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:01.699 Cannot find device "nvmf_init_if2" 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:01.699 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:01.699 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:01.699 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:10:01.958 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:10:01.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:10:01.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms
00:10:01.959
00:10:01.959 --- 10.0.0.3 ping statistics ---
00:10:01.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:01.959 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:10:01.959 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:10:01.959 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms
00:10:01.959
00:10:01.959 --- 10.0.0.4 ping statistics ---
00:10:01.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:01.959 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:10:01.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:01.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms
00:10:01.959
00:10:01.959 --- 10.0.0.1 ping statistics ---
00:10:01.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:01.959 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:10:01.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:01.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms
00:10:01.959
00:10:01.959 --- 10.0.0.2 ping statistics ---
00:10:01.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:01.959 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=69876
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 69876
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 69876 ']'
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:01.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:01.959 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:02.218 [2024-11-20 15:17:48.464654] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:10:02.218 [2024-11-20 15:17:48.464759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.218 [2024-11-20 15:17:48.603902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.218 [2024-11-20 15:17:48.689445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.218 [2024-11-20 15:17:48.689504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.218 [2024-11-20 15:17:48.689516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.218 [2024-11-20 15:17:48.689526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.218 [2024-11-20 15:17:48.689534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.218 [2024-11-20 15:17:48.690968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.218 [2024-11-20 15:17:48.691111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.218 [2024-11-20 15:17:48.691154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.218 [2024-11-20 15:17:48.691154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 [2024-11-20 15:17:49.473310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 Malloc0 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.154 15:17:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 [2024-11-20 15:17:49.553090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:03.154 test case1: single bdev can't be used in multiple subsystems 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 [2024-11-20 15:17:49.580968] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:03.154 [2024-11-20 15:17:49.581007] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:03.154 [2024-11-20 15:17:49.581019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 2024/11/20 15:17:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:03.154 request:
00:10:03.154 {
00:10:03.154 "method": "nvmf_subsystem_add_ns",
00:10:03.154 "params": {
00:10:03.154 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:10:03.154 "namespace": {
00:10:03.154 "bdev_name": "Malloc0",
00:10:03.154 "no_auto_visible": false
00:10:03.154 }
00:10:03.154 }
00:10:03.154 }
00:10:03.154 Got JSON-RPC error response
00:10:03.154 GoRPCClient: error on JSON-RPC call
00:10:03.154 Adding namespace failed - expected result.
00:10:03.154 test case2: host connect to nvmf target in multiple paths
00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:03.154 [2024-11-20 15:17:49.598037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.154 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:10:03.413 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
00:10:03.672 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:10:03.672 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:10:03.672 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:03.672 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:10:03.672 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:10:05.577 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:10:05.577 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:10:05.577 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:10:05.577 15:17:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:10:05.577 15:17:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:10:05.577 15:17:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:10:05.577 15:17:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:10:05.577 [global]
00:10:05.577 thread=1
00:10:05.577 invalidate=1
00:10:05.577 rw=write
00:10:05.577 time_based=1
00:10:05.577 runtime=1
00:10:05.577 ioengine=libaio
00:10:05.577 direct=1
00:10:05.577 bs=4096
00:10:05.577 iodepth=1
00:10:05.577 norandommap=0
00:10:05.577 numjobs=1
00:10:05.577
00:10:05.577 verify_dump=1
00:10:05.577 verify_backlog=512
00:10:05.577 verify_state_save=0
00:10:05.577 do_verify=1
00:10:05.577 verify=crc32c-intel
00:10:05.577 [job0]
00:10:05.577 filename=/dev/nvme0n1
00:10:05.835 Could not set queue depth (nvme0n1)
00:10:05.835 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:05.835 fio-3.35
00:10:05.835 Starting 1 thread
00:10:07.214
00:10:07.214 job0: (groupid=0, jobs=1): err= 0: pid=69991: Wed Nov 20 15:17:53 2024
00:10:07.214 read: IOPS=4456, BW=17.4MiB/s (18.3MB/s)(17.4MiB/1001msec)
00:10:07.214 slat (nsec): min=8145, max=36774, avg=9561.46, stdev=2461.28
00:10:07.214 clat (usec): min=90, max=876, avg=108.64, stdev=16.08
00:10:07.214 lat (usec): min=99, max=891, avg=118.20, stdev=16.74
00:10:07.214 clat percentiles (usec):
00:10:07.214 | 1.00th=[ 94], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 101],
00:10:07.214 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 109],
00:10:07.214 | 70.00th=[ 111], 80.00th=[ 115], 90.00th=[ 123], 95.00th=[ 133],
00:10:07.214 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 186], 99.95th=[ 206],
00:10:07.214 | 99.99th=[ 881]
00:10:07.214 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets
00:10:07.214 slat (usec): min=11, max=105, avg=14.75, stdev= 6.35
00:10:07.214 clat (usec): min=64, max=6832, avg=85.79, stdev=194.65
00:10:07.214 lat (usec): min=76, max=6851, avg=100.54, stdev=195.13
00:10:07.214 clat percentiles (usec):
00:10:07.214 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 72],
00:10:07.214 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 76], 60.00th=[ 77],
00:10:07.214 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 90], 95.00th=[ 95],
00:10:07.214 | 99.00th=[ 115], 99.50th=[ 133], 99.90th=[ 4047], 99.95th=[ 4490],
00:10:07.214 | 99.99th=[ 6849]
00:10:07.214 bw ( KiB/s): min=20480, max=20480, per=100.00%, avg=20480.00, stdev= 0.00, samples=1
00:10:07.214 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1
00:10:07.214 lat (usec) : 100=57.69%, 250=42.15%, 500=0.04%, 1000=0.01%
00:10:07.214 lat (msec) : 4=0.04%, 10=0.06%
00:10:07.214 cpu : usr=2.60%, sys=8.30%, ctx=9069, majf=0, minf=5
00:10:07.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:07.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:07.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:07.214 issued rwts: total=4461,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:07.214 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:07.214
00:10:07.214 Run status group 0 (all jobs):
00:10:07.214 READ: bw=17.4MiB/s (18.3MB/s), 17.4MiB/s-17.4MiB/s (18.3MB/s-18.3MB/s), io=17.4MiB (18.3MB), run=1001-1001msec
00:10:07.214 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec
00:10:07.214
00:10:07.214 Disk stats (read/write):
00:10:07.214 nvme0n1: ios=4043/4096, merge=0/0, ticks=461/353, in_queue=814, util=90.07%
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:07.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:07.214 rmmod nvme_tcp
00:10:07.214 rmmod nvme_fabrics
00:10:07.214 rmmod nvme_keyring
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 69876 ']'
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 69876
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 69876 ']'
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 69876
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69876
00:10:07.214 killing process with pid 69876
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 69876' 00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 69876 00:10:07.214 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 69876 00:10:07.473 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.473 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.473 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.473 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:07.473 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:07.473 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.473 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.473 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.473 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:07.473 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:07.732 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:07.732 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.732 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.992 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:07.992 00:10:07.992 real 0m6.606s 00:10:07.992 user 0m20.438s 00:10:07.992 sys 0m2.006s 00:10:07.992 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:10:07.992 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.992 ************************************ 00:10:07.992 END TEST nvmf_nmic 00:10:07.992 ************************************ 00:10:07.992 15:17:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:07.992 15:17:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.992 15:17:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.992 15:17:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.992 ************************************ 00:10:07.992 START TEST nvmf_fio_target 00:10:07.992 ************************************ 00:10:07.992 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:07.992 * Looking for test storage... 00:10:07.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:07.992 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:07.992 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:07.992 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.252 --rc genhtml_branch_coverage=1 00:10:08.252 --rc genhtml_function_coverage=1 00:10:08.252 --rc genhtml_legend=1 00:10:08.252 --rc geninfo_all_blocks=1 00:10:08.252 --rc geninfo_unexecuted_blocks=1 00:10:08.252 00:10:08.252 ' 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.252 --rc genhtml_branch_coverage=1 00:10:08.252 --rc genhtml_function_coverage=1 00:10:08.252 --rc genhtml_legend=1 00:10:08.252 --rc geninfo_all_blocks=1 00:10:08.252 --rc geninfo_unexecuted_blocks=1 00:10:08.252 00:10:08.252 ' 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.252 --rc genhtml_branch_coverage=1 00:10:08.252 --rc genhtml_function_coverage=1 00:10:08.252 --rc genhtml_legend=1 00:10:08.252 --rc geninfo_all_blocks=1 00:10:08.252 --rc geninfo_unexecuted_blocks=1 00:10:08.252 00:10:08.252 ' 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.252 --rc genhtml_branch_coverage=1 00:10:08.252 --rc genhtml_function_coverage=1 00:10:08.252 --rc genhtml_legend=1 00:10:08.252 --rc geninfo_all_blocks=1 00:10:08.252 --rc geninfo_unexecuted_blocks=1 00:10:08.252 00:10:08.252 ' 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:08.252 
15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:08.252 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.253 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:08.253 15:17:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:08.253 Cannot find device "nvmf_init_br" 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:08.253 Cannot find device "nvmf_init_br2" 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:08.253 Cannot find device "nvmf_tgt_br" 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:08.253 Cannot find device "nvmf_tgt_br2" 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:08.253 Cannot find device "nvmf_init_br" 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:08.253 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:08.513 Cannot find device "nvmf_init_br2" 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:08.513 Cannot find device "nvmf_tgt_br" 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:08.513 Cannot find device "nvmf_tgt_br2" 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:08.513 Cannot find device "nvmf_br" 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:08.513 Cannot find device "nvmf_init_if" 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:08.513 Cannot find device "nvmf_init_if2" 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:08.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:08.513 
15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:08.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:08.513 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:08.773 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:08.773 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:10:08.773 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:10:08.773 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms
00:10:08.773
00:10:08.773 --- 10.0.0.3 ping statistics ---
00:10:08.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:08.773 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:10:08.773 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:10:08.773 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms
00:10:08.773
00:10:08.773 --- 10.0.0.4 ping statistics ---
00:10:08.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:08.773 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:10:08.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:08.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms
00:10:08.773
00:10:08.773 --- 10.0.0.1 ping statistics ---
00:10:08.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:08.773 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:10:08.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:08.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms
00:10:08.773
00:10:08.773 --- 10.0.0.2 ping statistics ---
00:10:08.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:08.773 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:08.773 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=70230
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 70230
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 70230 ']'
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:08.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:08.774 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:09.032 [2024-11-20 15:17:55.299778] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
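The nvmf_veth_init sequence traced above is easier to follow when condensed. The sketch below replays its core steps as plain shell, assuming only iproute2 and iptables; it keeps the interface names, addresses, and the 4420/tcp rule visible in the trace, drops the second initiator/target veth pair and the existence checks, and is illustrative rather than a substitute for nvmf/common.sh. The 'SPDK_NVMF:' comment tag on each iptables rule is what lets the iptr teardown near the start of this section remove every test rule in one pass with iptables-save | grep -v SPDK_NVMF | iptables-restore.

    # Condensed replay of nvmf_veth_init (one of the two veth pairs shown).
    ip netns add nvmf_tgt_ns_spdk                               # target runs in its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge                             # bridge ties both sides together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Tag every rule so teardown can strip them wholesale:
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    ping -c 1 10.0.0.3                                          # initiator -> target reachability
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target -> initiator reachability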
00:10:09.032 [2024-11-20 15:17:55.299875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:09.032 [2024-11-20 15:17:55.454537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:09.291 [2024-11-20 15:17:55.531707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:09.291 [2024-11-20 15:17:55.531767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:09.291 [2024-11-20 15:17:55.531778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:09.291 [2024-11-20 15:17:55.531787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:09.291 [2024-11-20 15:17:55.531794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:09.291 [2024-11-20 15:17:55.533140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:09.291 [2024-11-20 15:17:55.533424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:09.291 [2024-11-20 15:17:55.534378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:09.291 [2024-11-20 15:17:55.534379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:09.859 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:09.859 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:10:09.859 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:09.859 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:09.859 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:09.859 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:09.859 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:10:10.118 [2024-11-20 15:17:56.506298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:10.118 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:10.378 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:10:10.378 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:10.636 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:10:10.636 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:10.894 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:10:10.894 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:11.462 15:17:57
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:11.462 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:11.462 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.720 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:11.720 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.979 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:11.979 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:12.237 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:12.237 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:12.496 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:12.755 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:12.755 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:13.013 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:13.013 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:13.272 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:13.531 [2024-11-20 15:17:59.814524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:13.531 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:13.788 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:13.788 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:14.093 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:14.093 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:14.093 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:10:14.093 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:14.093 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:14.093 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:16.010 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:16.010 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:16.010 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.010 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:16.010 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.010 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:16.010 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:16.268 [global] 00:10:16.268 thread=1 00:10:16.268 invalidate=1 00:10:16.268 rw=write 00:10:16.268 time_based=1 00:10:16.268 runtime=1 00:10:16.268 ioengine=libaio 00:10:16.268 direct=1 00:10:16.268 bs=4096 00:10:16.268 iodepth=1 00:10:16.268 norandommap=0 00:10:16.268 numjobs=1 00:10:16.268 00:10:16.268 verify_dump=1 00:10:16.268 verify_backlog=512 00:10:16.268 verify_state_save=0 00:10:16.268 do_verify=1 00:10:16.268 verify=crc32c-intel 00:10:16.268 [job0] 00:10:16.268 filename=/dev/nvme0n1 00:10:16.268 [job1] 00:10:16.268 filename=/dev/nvme0n2 00:10:16.268 [job2] 00:10:16.268 filename=/dev/nvme0n3 00:10:16.268 [job3] 00:10:16.268 filename=/dev/nvme0n4 00:10:16.268 Could not set queue depth (nvme0n1) 00:10:16.268 Could not set queue depth (nvme0n2) 00:10:16.268 Could not set queue depth (nvme0n3) 00:10:16.268 Could not set queue depth (nvme0n4) 00:10:16.268 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.268 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.268 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.268 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.268 fio-3.35 00:10:16.268 Starting 4 threads 00:10:17.675 00:10:17.675 job0: (groupid=0, jobs=1): err= 0: pid=70520: Wed Nov 20 15:18:03 2024 00:10:17.675 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:17.675 slat (nsec): min=6128, max=51796, avg=9608.40, stdev=4004.20 00:10:17.675 clat (usec): min=144, max=834, avg=257.73, stdev=48.76 00:10:17.675 lat (usec): min=151, max=841, avg=267.34, stdev=48.58 00:10:17.675 clat percentiles (usec): 00:10:17.675 | 1.00th=[ 198], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 235], 00:10:17.675 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:10:17.675 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 322], 00:10:17.675 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[ 652], 99.95th=[ 660], 00:10:17.675 | 99.99th=[ 832] 00:10:17.675 write: IOPS=2184, BW=8739KiB/s (8949kB/s)(8748KiB/1001msec); 0 zone resets 00:10:17.675 slat (nsec): 
min=7370, max=74796, avg=14819.83, stdev=5839.55 00:10:17.675 clat (usec): min=89, max=1782, avg=189.90, stdev=54.32 00:10:17.675 lat (usec): min=104, max=1795, avg=204.72, stdev=54.67 00:10:17.675 clat percentiles (usec): 00:10:17.675 | 1.00th=[ 100], 5.00th=[ 114], 10.00th=[ 143], 20.00th=[ 165], 00:10:17.675 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192], 00:10:17.675 | 70.00th=[ 200], 80.00th=[ 212], 90.00th=[ 239], 95.00th=[ 273], 00:10:17.675 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 453], 99.95th=[ 553], 00:10:17.675 | 99.99th=[ 1778] 00:10:17.675 bw ( KiB/s): min= 8208, max= 8208, per=17.91%, avg=8208.00, stdev= 0.00, samples=1 00:10:17.675 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:10:17.675 lat (usec) : 100=0.61%, 250=72.30%, 500=26.54%, 750=0.50%, 1000=0.02% 00:10:17.675 lat (msec) : 2=0.02% 00:10:17.675 cpu : usr=1.40%, sys=4.20%, ctx=4235, majf=0, minf=9 00:10:17.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.675 issued rwts: total=2048,2187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.675 job1: (groupid=0, jobs=1): err= 0: pid=70521: Wed Nov 20 15:18:03 2024 00:10:17.675 read: IOPS=3228, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec) 00:10:17.675 slat (nsec): min=8050, max=28395, avg=9157.75, stdev=1769.95 00:10:17.675 clat (usec): min=121, max=469, avg=151.62, stdev=17.30 00:10:17.675 lat (usec): min=129, max=479, avg=160.78, stdev=17.81 00:10:17.675 clat percentiles (usec): 00:10:17.675 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 139], 00:10:17.675 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:10:17.675 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 176], 95.00th=[ 186], 00:10:17.675 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 227], 99.95th=[ 388], 00:10:17.675 | 99.99th=[ 469] 00:10:17.675 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:17.675 slat (nsec): min=11646, max=97757, avg=15100.18, stdev=6958.42 00:10:17.675 clat (usec): min=79, max=378, avg=117.01, stdev=16.00 00:10:17.675 lat (usec): min=93, max=398, avg=132.11, stdev=19.44 00:10:17.675 clat percentiles (usec): 00:10:17.675 | 1.00th=[ 95], 5.00th=[ 99], 10.00th=[ 102], 20.00th=[ 105], 00:10:17.675 | 30.00th=[ 109], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 117], 00:10:17.675 | 70.00th=[ 122], 80.00th=[ 128], 90.00th=[ 139], 95.00th=[ 147], 00:10:17.675 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 194], 99.95th=[ 314], 00:10:17.675 | 99.99th=[ 379] 00:10:17.675 bw ( KiB/s): min=13512, max=13512, per=29.49%, avg=13512.00, stdev= 0.00, samples=1 00:10:17.675 iops : min= 3378, max= 3378, avg=3378.00, stdev= 0.00, samples=1 00:10:17.675 lat (usec) : 100=3.58%, 250=96.35%, 500=0.07% 00:10:17.675 cpu : usr=1.50%, sys=6.70%, ctx=6816, majf=0, minf=7 00:10:17.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.675 issued rwts: total=3232,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.675 job2: (groupid=0, jobs=1): err= 0: pid=70522: Wed Nov 20 15:18:03 2024 
00:10:17.675 read: IOPS=3223, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec) 00:10:17.675 slat (nsec): min=8238, max=32998, avg=10951.61, stdev=3256.18 00:10:17.675 clat (usec): min=113, max=691, avg=149.95, stdev=14.52 00:10:17.675 lat (usec): min=129, max=707, avg=160.90, stdev=15.26 00:10:17.675 clat percentiles (usec): 00:10:17.675 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 141], 00:10:17.675 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:10:17.675 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 169], 00:10:17.675 | 99.00th=[ 184], 99.50th=[ 186], 99.90th=[ 198], 99.95th=[ 208], 00:10:17.675 | 99.99th=[ 693] 00:10:17.675 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:17.675 slat (nsec): min=11762, max=90027, avg=14531.39, stdev=3884.21 00:10:17.675 clat (usec): min=71, max=175, avg=117.54, stdev=10.87 00:10:17.675 lat (usec): min=104, max=245, avg=132.08, stdev=11.96 00:10:17.675 clat percentiles (usec): 00:10:17.675 | 1.00th=[ 99], 5.00th=[ 103], 10.00th=[ 105], 20.00th=[ 110], 00:10:17.675 | 30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 116], 60.00th=[ 119], 00:10:17.675 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 133], 95.00th=[ 139], 00:10:17.675 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 176], 00:10:17.675 | 99.99th=[ 176] 00:10:17.675 bw ( KiB/s): min=15144, max=15144, per=33.05%, avg=15144.00, stdev= 0.00, samples=1 00:10:17.675 iops : min= 3786, max= 3786, avg=3786.00, stdev= 0.00, samples=1 00:10:17.675 lat (usec) : 100=0.88%, 250=99.10%, 750=0.01% 00:10:17.675 cpu : usr=2.20%, sys=6.40%, ctx=6818, majf=0, minf=9 00:10:17.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.675 issued rwts: total=3227,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.675 job3: (groupid=0, jobs=1): err= 0: pid=70523: Wed Nov 20 15:18:03 2024 00:10:17.675 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:17.676 slat (nsec): min=6167, max=65288, avg=9643.90, stdev=4304.63 00:10:17.676 clat (usec): min=144, max=840, avg=261.56, stdev=52.12 00:10:17.676 lat (usec): min=152, max=849, avg=271.20, stdev=52.71 00:10:17.676 clat percentiles (usec): 00:10:17.676 | 1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:10:17.676 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 255], 00:10:17.676 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 302], 95.00th=[ 334], 00:10:17.676 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 652], 99.95th=[ 652], 00:10:17.676 | 99.99th=[ 840] 00:10:17.676 write: IOPS=2108, BW=8436KiB/s (8638kB/s)(8444KiB/1001msec); 0 zone resets 00:10:17.676 slat (usec): min=7, max=119, avg=15.44, stdev= 6.80 00:10:17.676 clat (usec): min=71, max=3894, avg=192.98, stdev=119.78 00:10:17.676 lat (usec): min=85, max=3915, avg=208.41, stdev=120.40 00:10:17.676 clat percentiles (usec): 00:10:17.676 | 1.00th=[ 108], 5.00th=[ 125], 10.00th=[ 149], 20.00th=[ 165], 00:10:17.676 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192], 00:10:17.676 | 70.00th=[ 200], 80.00th=[ 212], 90.00th=[ 237], 95.00th=[ 260], 00:10:17.676 | 99.00th=[ 310], 99.50th=[ 355], 99.90th=[ 1713], 99.95th=[ 3523], 00:10:17.676 | 99.99th=[ 3884] 00:10:17.676 bw ( KiB/s): min= 8208, max= 8208, per=17.91%, avg=8208.00, stdev= 
0.00, samples=1 00:10:17.676 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:10:17.676 lat (usec) : 100=0.31%, 250=71.65%, 500=27.36%, 750=0.58%, 1000=0.02% 00:10:17.676 lat (msec) : 2=0.02%, 4=0.05% 00:10:17.676 cpu : usr=1.50%, sys=4.20%, ctx=4161, majf=0, minf=23 00:10:17.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.676 issued rwts: total=2048,2111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.676 00:10:17.676 Run status group 0 (all jobs): 00:10:17.676 READ: bw=41.2MiB/s (43.2MB/s), 8184KiB/s-12.6MiB/s (8380kB/s-13.2MB/s), io=41.2MiB (43.2MB), run=1001-1001msec 00:10:17.676 WRITE: bw=44.7MiB/s (46.9MB/s), 8436KiB/s-14.0MiB/s (8638kB/s-14.7MB/s), io=44.8MiB (47.0MB), run=1001-1001msec 00:10:17.676 00:10:17.676 Disk stats (read/write): 00:10:17.676 nvme0n1: ios=1676/2048, merge=0/0, ticks=436/386, in_queue=822, util=88.88% 00:10:17.676 nvme0n2: ios=2833/3072, merge=0/0, ticks=444/378, in_queue=822, util=88.88% 00:10:17.676 nvme0n3: ios=2829/3072, merge=0/0, ticks=442/379, in_queue=821, util=89.81% 00:10:17.676 nvme0n4: ios=1577/2048, merge=0/0, ticks=408/385, in_queue=793, util=89.45% 00:10:17.676 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:17.676 [global] 00:10:17.676 thread=1 00:10:17.676 invalidate=1 00:10:17.676 rw=randwrite 00:10:17.676 time_based=1 00:10:17.676 runtime=1 00:10:17.676 ioengine=libaio 00:10:17.676 direct=1 00:10:17.676 bs=4096 00:10:17.676 iodepth=1 00:10:17.676 norandommap=0 00:10:17.676 numjobs=1 00:10:17.676 00:10:17.676 verify_dump=1 00:10:17.676 verify_backlog=512 00:10:17.676 verify_state_save=0 00:10:17.676 do_verify=1 00:10:17.676 verify=crc32c-intel 00:10:17.676 [job0] 00:10:17.676 filename=/dev/nvme0n1 00:10:17.676 [job1] 00:10:17.676 filename=/dev/nvme0n2 00:10:17.676 [job2] 00:10:17.676 filename=/dev/nvme0n3 00:10:17.676 [job3] 00:10:17.676 filename=/dev/nvme0n4 00:10:17.676 Could not set queue depth (nvme0n1) 00:10:17.676 Could not set queue depth (nvme0n2) 00:10:17.676 Could not set queue depth (nvme0n3) 00:10:17.676 Could not set queue depth (nvme0n4) 00:10:17.935 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.935 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.935 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.935 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.935 fio-3.35 00:10:17.935 Starting 4 threads 00:10:19.315 00:10:19.315 job0: (groupid=0, jobs=1): err= 0: pid=70581: Wed Nov 20 15:18:05 2024 00:10:19.315 read: IOPS=1755, BW=7021KiB/s (7189kB/s)(7028KiB/1001msec) 00:10:19.315 slat (nsec): min=8040, max=28493, avg=9948.58, stdev=2859.70 00:10:19.315 clat (usec): min=130, max=6127, avg=286.40, stdev=230.21 00:10:19.315 lat (usec): min=141, max=6136, avg=296.35, stdev=230.70 00:10:19.315 clat percentiles (usec): 00:10:19.315 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:10:19.315 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 
60.00th=[ 265], 00:10:19.315 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 347], 95.00th=[ 375], 00:10:19.315 | 99.00th=[ 453], 99.50th=[ 881], 99.90th=[ 4146], 99.95th=[ 6128], 00:10:19.315 | 99.99th=[ 6128] 00:10:19.315 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:19.315 slat (usec): min=11, max=187, avg=19.95, stdev= 8.08 00:10:19.315 clat (usec): min=85, max=2434, avg=212.13, stdev=58.96 00:10:19.315 lat (usec): min=100, max=2450, avg=232.08, stdev=59.07 00:10:19.315 clat percentiles (usec): 00:10:19.315 | 1.00th=[ 147], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 190], 00:10:19.315 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:10:19.315 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 249], 95.00th=[ 262], 00:10:19.315 | 99.00th=[ 293], 99.50th=[ 322], 99.90th=[ 553], 99.95th=[ 725], 00:10:19.315 | 99.99th=[ 2442] 00:10:19.315 bw ( KiB/s): min= 8192, max= 8192, per=18.06%, avg=8192.00, stdev= 0.00, samples=1 00:10:19.315 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:19.315 lat (usec) : 100=0.21%, 250=60.13%, 500=39.34%, 750=0.05%, 1000=0.03% 00:10:19.315 lat (msec) : 2=0.05%, 4=0.13%, 10=0.05% 00:10:19.315 cpu : usr=0.70%, sys=5.10%, ctx=3806, majf=0, minf=13 00:10:19.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.315 issued rwts: total=1757,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.315 job1: (groupid=0, jobs=1): err= 0: pid=70582: Wed Nov 20 15:18:05 2024 00:10:19.315 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:19.316 slat (nsec): min=8044, max=30919, avg=9534.63, stdev=2038.47 00:10:19.316 clat (usec): min=114, max=1556, avg=139.13, stdev=26.05 00:10:19.316 lat (usec): min=122, max=1565, avg=148.67, stdev=26.25 00:10:19.316 clat percentiles (usec): 00:10:19.316 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 131], 00:10:19.316 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:10:19.316 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 151], 95.00th=[ 157], 00:10:19.316 | 99.00th=[ 172], 99.50th=[ 176], 99.90th=[ 258], 99.95th=[ 293], 00:10:19.316 | 99.99th=[ 1565] 00:10:19.316 write: IOPS=3755, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1001msec); 0 zone resets 00:10:19.316 slat (usec): min=11, max=111, avg=14.61, stdev= 5.36 00:10:19.316 clat (usec): min=82, max=243, avg=107.61, stdev=10.58 00:10:19.316 lat (usec): min=94, max=344, avg=122.22, stdev=13.26 00:10:19.316 clat percentiles (usec): 00:10:19.316 | 1.00th=[ 90], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 100], 00:10:19.316 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 110], 00:10:19.316 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 127], 00:10:19.316 | 99.00th=[ 139], 99.50th=[ 145], 99.90th=[ 165], 99.95th=[ 233], 00:10:19.316 | 99.99th=[ 243] 00:10:19.316 bw ( KiB/s): min=16384, max=16384, per=36.11%, avg=16384.00, stdev= 0.00, samples=1 00:10:19.316 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:19.316 lat (usec) : 100=11.66%, 250=88.29%, 500=0.04% 00:10:19.316 lat (msec) : 2=0.01% 00:10:19.316 cpu : usr=1.70%, sys=7.20%, ctx=7344, majf=0, minf=7 00:10:19.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.316 issued rwts: total=3584,3759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.316 job2: (groupid=0, jobs=1): err= 0: pid=70583: Wed Nov 20 15:18:05 2024 00:10:19.316 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:19.316 slat (nsec): min=8135, max=39969, avg=9617.07, stdev=2841.29 00:10:19.316 clat (usec): min=129, max=1933, avg=159.26, stdev=39.48 00:10:19.316 lat (usec): min=138, max=1942, avg=168.88, stdev=39.66 00:10:19.316 clat percentiles (usec): 00:10:19.316 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:10:19.316 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:10:19.316 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 00:10:19.316 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 529], 99.95th=[ 1045], 00:10:19.316 | 99.99th=[ 1942] 00:10:19.316 write: IOPS=3481, BW=13.6MiB/s (14.3MB/s)(13.6MiB/1001msec); 0 zone resets 00:10:19.316 slat (usec): min=11, max=122, avg=14.11, stdev= 5.47 00:10:19.316 clat (usec): min=94, max=450, avg=122.00, stdev=12.55 00:10:19.316 lat (usec): min=107, max=463, avg=136.11, stdev=14.92 00:10:19.316 clat percentiles (usec): 00:10:19.316 | 1.00th=[ 103], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 114], 00:10:19.316 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 124], 00:10:19.316 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 137], 95.00th=[ 143], 00:10:19.316 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 184], 99.95th=[ 233], 00:10:19.316 | 99.99th=[ 453] 00:10:19.316 bw ( KiB/s): min=14240, max=14240, per=31.39%, avg=14240.00, stdev= 0.00, samples=1 00:10:19.316 iops : min= 3560, max= 3560, avg=3560.00, stdev= 0.00, samples=1 00:10:19.316 lat (usec) : 100=0.24%, 250=99.66%, 500=0.02%, 750=0.05% 00:10:19.316 lat (msec) : 2=0.03% 00:10:19.316 cpu : usr=1.70%, sys=6.20%, ctx=6558, majf=0, minf=15 00:10:19.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.316 issued rwts: total=3072,3485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.316 job3: (groupid=0, jobs=1): err= 0: pid=70584: Wed Nov 20 15:18:05 2024 00:10:19.316 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:19.316 slat (nsec): min=8236, max=32529, avg=10805.33, stdev=2356.27 00:10:19.316 clat (usec): min=140, max=715, avg=252.29, stdev=35.92 00:10:19.316 lat (usec): min=149, max=727, avg=263.10, stdev=36.32 00:10:19.316 clat percentiles (usec): 00:10:19.316 | 1.00th=[ 149], 5.00th=[ 169], 10.00th=[ 227], 20.00th=[ 237], 00:10:19.316 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:10:19.316 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 297], 00:10:19.316 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 449], 99.95th=[ 453], 00:10:19.316 | 99.99th=[ 717] 00:10:19.316 write: IOPS=2058, BW=8236KiB/s (8433kB/s)(8244KiB/1001msec); 0 zone resets 00:10:19.316 slat (usec): min=12, max=107, avg=19.57, stdev= 7.90 00:10:19.316 clat (usec): min=81, max=960, avg=201.59, stdev=41.60 00:10:19.316 lat (usec): min=97, max=999, avg=221.16, stdev=42.13 00:10:19.316 clat percentiles (usec): 00:10:19.316 | 1.00th=[ 108], 
5.00th=[ 128], 10.00th=[ 155], 20.00th=[ 182], 00:10:19.316 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:10:19.316 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 245], 95.00th=[ 262], 00:10:19.316 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 449], 99.95th=[ 652], 00:10:19.316 | 99.99th=[ 963] 00:10:19.316 bw ( KiB/s): min= 8192, max= 8192, per=18.06%, avg=8192.00, stdev= 0.00, samples=1 00:10:19.316 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:19.316 lat (usec) : 100=0.24%, 250=68.61%, 500=31.08%, 750=0.05%, 1000=0.02% 00:10:19.316 cpu : usr=1.50%, sys=4.70%, ctx=4112, majf=0, minf=13 00:10:19.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.316 issued rwts: total=2048,2061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.316 00:10:19.316 Run status group 0 (all jobs): 00:10:19.316 READ: bw=40.8MiB/s (42.8MB/s), 7021KiB/s-14.0MiB/s (7189kB/s-14.7MB/s), io=40.9MiB (42.8MB), run=1001-1001msec 00:10:19.316 WRITE: bw=44.3MiB/s (46.5MB/s), 8184KiB/s-14.7MiB/s (8380kB/s-15.4MB/s), io=44.3MiB (46.5MB), run=1001-1001msec 00:10:19.316 00:10:19.316 Disk stats (read/write): 00:10:19.316 nvme0n1: ios=1585/1703, merge=0/0, ticks=470/371, in_queue=841, util=87.26% 00:10:19.316 nvme0n2: ios=3072/3232, merge=0/0, ticks=433/359, in_queue=792, util=86.97% 00:10:19.316 nvme0n3: ios=2560/3062, merge=0/0, ticks=417/386, in_queue=803, util=88.96% 00:10:19.316 nvme0n4: ios=1536/2003, merge=0/0, ticks=395/418, in_queue=813, util=89.61% 00:10:19.316 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:19.316 [global] 00:10:19.316 thread=1 00:10:19.316 invalidate=1 00:10:19.316 rw=write 00:10:19.316 time_based=1 00:10:19.316 runtime=1 00:10:19.316 ioengine=libaio 00:10:19.316 direct=1 00:10:19.316 bs=4096 00:10:19.316 iodepth=128 00:10:19.316 norandommap=0 00:10:19.316 numjobs=1 00:10:19.316 00:10:19.316 verify_dump=1 00:10:19.316 verify_backlog=512 00:10:19.316 verify_state_save=0 00:10:19.316 do_verify=1 00:10:19.316 verify=crc32c-intel 00:10:19.316 [job0] 00:10:19.316 filename=/dev/nvme0n1 00:10:19.316 [job1] 00:10:19.316 filename=/dev/nvme0n2 00:10:19.316 [job2] 00:10:19.316 filename=/dev/nvme0n3 00:10:19.316 [job3] 00:10:19.316 filename=/dev/nvme0n4 00:10:19.316 Could not set queue depth (nvme0n1) 00:10:19.316 Could not set queue depth (nvme0n2) 00:10:19.316 Could not set queue depth (nvme0n3) 00:10:19.316 Could not set queue depth (nvme0n4) 00:10:19.316 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.316 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.316 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.316 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.316 fio-3.35 00:10:19.316 Starting 4 threads 00:10:20.693 00:10:20.693 job0: (groupid=0, jobs=1): err= 0: pid=70645: Wed Nov 20 15:18:06 2024 00:10:20.693 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:10:20.693 slat (usec): min=7, max=2502, avg=80.01, 
stdev=287.40 00:10:20.693 clat (usec): min=8287, max=13354, avg=10935.17, stdev=815.19 00:10:20.693 lat (usec): min=8609, max=13388, avg=11015.18, stdev=795.25 00:10:20.693 clat percentiles (usec): 00:10:20.693 | 1.00th=[ 9110], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:10:20.693 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:10:20.693 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11994], 95.00th=[12125], 00:10:20.693 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12911], 99.95th=[13304], 00:10:20.693 | 99.99th=[13304] 00:10:20.693 write: IOPS=6122, BW=23.9MiB/s (25.1MB/s)(23.9MiB/1001msec); 0 zone resets 00:10:20.693 slat (usec): min=13, max=2108, avg=77.36, stdev=232.00 00:10:20.693 clat (usec): min=385, max=12818, avg=10605.56, stdev=1054.41 00:10:20.693 lat (usec): min=418, max=13015, avg=10682.92, stdev=1063.06 00:10:20.693 clat percentiles (usec): 00:10:20.693 | 1.00th=[ 6325], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:10:20.693 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:10:20.693 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11863], 95.00th=[12256], 00:10:20.693 | 99.00th=[12649], 99.50th=[12649], 99.90th=[12780], 99.95th=[12780], 00:10:20.693 | 99.99th=[12780] 00:10:20.693 bw ( KiB/s): min=24576, max=24576, per=34.42%, avg=24576.00, stdev= 0.00, samples=1 00:10:20.693 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:10:20.693 lat (usec) : 500=0.03%, 750=0.02% 00:10:20.693 lat (msec) : 4=0.33%, 10=14.62%, 20=85.00% 00:10:20.693 cpu : usr=7.00%, sys=24.90%, ctx=923, majf=0, minf=9 00:10:20.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:20.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.693 issued rwts: total=5632,6129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.693 job1: (groupid=0, jobs=1): err= 0: pid=70646: Wed Nov 20 15:18:06 2024 00:10:20.693 read: IOPS=4663, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1002msec) 00:10:20.693 slat (usec): min=7, max=6556, avg=99.61, stdev=399.45 00:10:20.693 clat (usec): min=356, max=27380, avg=13078.75, stdev=4730.48 00:10:20.693 lat (usec): min=2145, max=27401, avg=13178.36, stdev=4758.89 00:10:20.693 clat percentiles (usec): 00:10:20.693 | 1.00th=[ 5407], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10421], 00:10:20.693 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:10:20.693 | 70.00th=[11863], 80.00th=[12518], 90.00th=[22414], 95.00th=[23725], 00:10:20.693 | 99.00th=[25822], 99.50th=[26084], 99.90th=[27395], 99.95th=[27395], 00:10:20.693 | 99.99th=[27395] 00:10:20.694 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:20.694 slat (usec): min=9, max=5706, avg=92.49, stdev=345.69 00:10:20.694 clat (usec): min=5897, max=27025, avg=12720.76, stdev=4751.95 00:10:20.694 lat (usec): min=5929, max=27060, avg=12813.24, stdev=4781.63 00:10:20.694 clat percentiles (usec): 00:10:20.694 | 1.00th=[ 8979], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9765], 00:10:20.694 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10945], 00:10:20.694 | 70.00th=[11600], 80.00th=[12649], 90.00th=[22414], 95.00th=[22938], 00:10:20.694 | 99.00th=[25035], 99.50th=[25822], 99.90th=[26870], 99.95th=[26870], 00:10:20.694 | 99.99th=[27132] 00:10:20.694 bw ( KiB/s): min=16384, max=16384, per=22.94%, 
avg=16384.00, stdev= 0.00, samples=1 00:10:20.694 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:20.694 lat (usec) : 500=0.01% 00:10:20.694 lat (msec) : 4=0.33%, 10=19.35%, 20=64.16%, 50=16.15% 00:10:20.694 cpu : usr=5.19%, sys=20.98%, ctx=925, majf=0, minf=11 00:10:20.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:20.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.694 issued rwts: total=4673,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.694 job2: (groupid=0, jobs=1): err= 0: pid=70647: Wed Nov 20 15:18:06 2024 00:10:20.694 read: IOPS=3516, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1003msec) 00:10:20.694 slat (usec): min=5, max=8222, avg=156.32, stdev=729.12 00:10:20.694 clat (usec): min=795, max=33318, avg=20142.68, stdev=7829.18 00:10:20.694 lat (usec): min=3538, max=33338, avg=20299.01, stdev=7860.46 00:10:20.694 clat percentiles (usec): 00:10:20.694 | 1.00th=[ 9765], 5.00th=[11338], 10.00th=[11731], 20.00th=[12125], 00:10:20.694 | 30.00th=[12518], 40.00th=[13304], 50.00th=[20841], 60.00th=[24511], 00:10:20.694 | 70.00th=[26608], 80.00th=[28705], 90.00th=[30278], 95.00th=[30802], 00:10:20.694 | 99.00th=[32375], 99.50th=[32637], 99.90th=[33424], 99.95th=[33424], 00:10:20.694 | 99.99th=[33424] 00:10:20.694 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:10:20.694 slat (usec): min=7, max=4545, avg=113.41, stdev=404.25 00:10:20.694 clat (usec): min=8536, max=28638, avg=15530.23, stdev=4513.89 00:10:20.694 lat (usec): min=8951, max=28670, avg=15643.64, stdev=4542.69 00:10:20.694 clat percentiles (usec): 00:10:20.694 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10814], 20.00th=[11338], 00:10:20.694 | 30.00th=[11600], 40.00th=[12518], 50.00th=[13304], 60.00th=[17171], 00:10:20.694 | 70.00th=[19006], 80.00th=[20317], 90.00th=[21890], 95.00th=[22938], 00:10:20.694 | 99.00th=[25297], 99.50th=[26084], 99.90th=[28705], 99.95th=[28705], 00:10:20.694 | 99.99th=[28705] 00:10:20.694 bw ( KiB/s): min=10452, max=18240, per=20.09%, avg=14346.00, stdev=5506.95, samples=2 00:10:20.694 iops : min= 2613, max= 4560, avg=3586.50, stdev=1376.74, samples=2 00:10:20.694 lat (usec) : 1000=0.01% 00:10:20.694 lat (msec) : 4=0.15%, 10=1.31%, 20=61.65%, 50=36.87% 00:10:20.694 cpu : usr=4.59%, sys=13.97%, ctx=785, majf=0, minf=12 00:10:20.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:20.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.694 issued rwts: total=3527,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.694 job3: (groupid=0, jobs=1): err= 0: pid=70648: Wed Nov 20 15:18:06 2024 00:10:20.694 read: IOPS=2615, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1002msec) 00:10:20.694 slat (usec): min=7, max=8714, avg=186.51, stdev=769.41 00:10:20.694 clat (usec): min=653, max=32544, avg=24221.92, stdev=4625.07 00:10:20.694 lat (usec): min=2663, max=32564, avg=24408.43, stdev=4611.21 00:10:20.694 clat percentiles (usec): 00:10:20.694 | 1.00th=[ 4359], 5.00th=[17957], 10.00th=[20055], 20.00th=[21627], 00:10:20.694 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23987], 60.00th=[25297], 00:10:20.694 | 70.00th=[26870], 80.00th=[27657], 
90.00th=[29754], 95.00th=[30802], 00:10:20.694 | 99.00th=[32375], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:10:20.694 | 99.99th=[32637] 00:10:20.694 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:10:20.694 slat (usec): min=9, max=5557, avg=152.57, stdev=531.14 00:10:20.694 clat (usec): min=10697, max=27747, avg=20347.50, stdev=2521.49 00:10:20.694 lat (usec): min=11704, max=27779, avg=20500.07, stdev=2519.38 00:10:20.694 clat percentiles (usec): 00:10:20.694 | 1.00th=[12780], 5.00th=[16450], 10.00th=[17171], 20.00th=[17957], 00:10:20.694 | 30.00th=[19006], 40.00th=[19792], 50.00th=[20579], 60.00th=[21365], 00:10:20.694 | 70.00th=[22152], 80.00th=[22414], 90.00th=[23200], 95.00th=[24249], 00:10:20.694 | 99.00th=[26608], 99.50th=[27132], 99.90th=[27395], 99.95th=[27657], 00:10:20.694 | 99.99th=[27657] 00:10:20.694 bw ( KiB/s): min=11775, max=12288, per=16.85%, avg=12031.50, stdev=362.75, samples=2 00:10:20.694 iops : min= 2943, max= 3072, avg=3007.50, stdev=91.22, samples=2 00:10:20.694 lat (usec) : 750=0.02% 00:10:20.694 lat (msec) : 4=0.39%, 10=0.56%, 20=26.28%, 50=72.76% 00:10:20.694 cpu : usr=3.80%, sys=12.89%, ctx=888, majf=0, minf=13 00:10:20.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:20.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.694 issued rwts: total=2621,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.694 00:10:20.694 Run status group 0 (all jobs): 00:10:20.694 READ: bw=64.1MiB/s (67.2MB/s), 10.2MiB/s-22.0MiB/s (10.7MB/s-23.0MB/s), io=64.3MiB (67.4MB), run=1001-1003msec 00:10:20.694 WRITE: bw=69.7MiB/s (73.1MB/s), 12.0MiB/s-23.9MiB/s (12.6MB/s-25.1MB/s), io=69.9MiB (73.3MB), run=1001-1003msec 00:10:20.694 00:10:20.694 Disk stats (read/write): 00:10:20.694 nvme0n1: ios=5072/5120, merge=0/0, ticks=12308/11027, in_queue=23335, util=88.18% 00:10:20.694 nvme0n2: ios=4049/4096, merge=0/0, ticks=12425/10584, in_queue=23009, util=87.73% 00:10:20.694 nvme0n3: ios=3072/3281, merge=0/0, ticks=13590/10023, in_queue=23613, util=88.54% 00:10:20.694 nvme0n4: ios=2310/2560, merge=0/0, ticks=12534/10820, in_queue=23354, util=88.68% 00:10:20.694 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:20.694 [global] 00:10:20.694 thread=1 00:10:20.694 invalidate=1 00:10:20.694 rw=randwrite 00:10:20.694 time_based=1 00:10:20.694 runtime=1 00:10:20.694 ioengine=libaio 00:10:20.694 direct=1 00:10:20.694 bs=4096 00:10:20.694 iodepth=128 00:10:20.694 norandommap=0 00:10:20.694 numjobs=1 00:10:20.694 00:10:20.694 verify_dump=1 00:10:20.694 verify_backlog=512 00:10:20.694 verify_state_save=0 00:10:20.694 do_verify=1 00:10:20.694 verify=crc32c-intel 00:10:20.694 [job0] 00:10:20.694 filename=/dev/nvme0n1 00:10:20.694 [job1] 00:10:20.694 filename=/dev/nvme0n2 00:10:20.694 [job2] 00:10:20.694 filename=/dev/nvme0n3 00:10:20.694 [job3] 00:10:20.694 filename=/dev/nvme0n4 00:10:20.694 Could not set queue depth (nvme0n1) 00:10:20.694 Could not set queue depth (nvme0n2) 00:10:20.694 Could not set queue depth (nvme0n3) 00:10:20.694 Could not set queue depth (nvme0n4) 00:10:20.694 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.694 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.694 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.695 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.695 fio-3.35 00:10:20.695 Starting 4 threads 00:10:22.075 00:10:22.075 job0: (groupid=0, jobs=1): err= 0: pid=70701: Wed Nov 20 15:18:08 2024 00:10:22.075 read: IOPS=2962, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1005msec) 00:10:22.075 slat (usec): min=3, max=12380, avg=178.80, stdev=884.75 00:10:22.075 clat (usec): min=2745, max=42663, avg=23161.31, stdev=5210.90 00:10:22.075 lat (usec): min=4895, max=42672, avg=23340.12, stdev=5183.19 00:10:22.075 clat percentiles (usec): 00:10:22.075 | 1.00th=[ 7635], 5.00th=[17171], 10.00th=[18744], 20.00th=[19268], 00:10:22.075 | 30.00th=[21103], 40.00th=[21890], 50.00th=[22414], 60.00th=[23200], 00:10:22.075 | 70.00th=[24511], 80.00th=[25560], 90.00th=[28181], 95.00th=[35390], 00:10:22.075 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:22.075 | 99.99th=[42730] 00:10:22.075 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:10:22.075 slat (usec): min=10, max=7300, avg=140.84, stdev=631.51 00:10:22.075 clat (usec): min=11238, max=33329, avg=18781.60, stdev=4066.09 00:10:22.075 lat (usec): min=13479, max=33361, avg=18922.44, stdev=4042.37 00:10:22.075 clat percentiles (usec): 00:10:22.075 | 1.00th=[13566], 5.00th=[13960], 10.00th=[14353], 20.00th=[15270], 00:10:22.075 | 30.00th=[15795], 40.00th=[16909], 50.00th=[17433], 60.00th=[19006], 00:10:22.075 | 70.00th=[20579], 80.00th=[21890], 90.00th=[24773], 95.00th=[25822], 00:10:22.075 | 99.00th=[32637], 99.50th=[33162], 99.90th=[33424], 99.95th=[33424], 00:10:22.075 | 99.99th=[33424] 00:10:22.075 bw ( KiB/s): min=12263, max=12288, per=21.46%, avg=12275.50, stdev=17.68, samples=2 00:10:22.075 iops : min= 3065, max= 3072, avg=3068.50, stdev= 4.95, samples=2 00:10:22.075 lat (msec) : 4=0.02%, 10=0.81%, 20=43.43%, 50=55.74% 00:10:22.075 cpu : usr=3.59%, sys=11.65%, ctx=305, majf=0, minf=3 00:10:22.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:22.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.075 issued rwts: total=2977,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.075 job1: (groupid=0, jobs=1): err= 0: pid=70702: Wed Nov 20 15:18:08 2024 00:10:22.075 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:22.075 slat (usec): min=9, max=9246, avg=165.28, stdev=745.35 00:10:22.075 clat (usec): min=12301, max=34165, avg=21950.57, stdev=4445.26 00:10:22.075 lat (usec): min=14545, max=34198, avg=22115.85, stdev=4419.21 00:10:22.075 clat percentiles (usec): 00:10:22.075 | 1.00th=[14746], 5.00th=[17695], 10.00th=[18220], 20.00th=[18482], 00:10:22.075 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19792], 60.00th=[21627], 00:10:22.075 | 70.00th=[23200], 80.00th=[26608], 90.00th=[29230], 95.00th=[30540], 00:10:22.075 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:10:22.075 | 99.99th=[34341] 00:10:22.075 write: IOPS=3319, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1003msec); 0 zone resets 00:10:22.075 slat (usec): min=23, max=6061, avg=135.01, stdev=549.05 00:10:22.075 clat 
(usec): min=1495, max=31919, avg=17700.10, stdev=5277.82 00:10:22.075 lat (usec): min=3936, max=31951, avg=17835.12, stdev=5289.04 00:10:22.075 clat percentiles (usec): 00:10:22.075 | 1.00th=[ 7111], 5.00th=[12780], 10.00th=[13042], 20.00th=[13435], 00:10:22.075 | 30.00th=[13698], 40.00th=[15401], 50.00th=[16319], 60.00th=[17171], 00:10:22.075 | 70.00th=[19006], 80.00th=[23725], 90.00th=[25822], 95.00th=[28705], 00:10:22.075 | 99.00th=[31589], 99.50th=[31851], 99.90th=[31851], 99.95th=[31851], 00:10:22.075 | 99.99th=[31851] 00:10:22.075 bw ( KiB/s): min=12312, max=13293, per=22.38%, avg=12802.50, stdev=693.67, samples=2 00:10:22.075 iops : min= 3078, max= 3323, avg=3200.50, stdev=173.24, samples=2 00:10:22.075 lat (msec) : 2=0.02%, 4=0.03%, 10=0.97%, 20=61.43%, 50=37.56% 00:10:22.075 cpu : usr=4.69%, sys=13.37%, ctx=303, majf=0, minf=1 00:10:22.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:22.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.075 issued rwts: total=3072,3329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.075 job2: (groupid=0, jobs=1): err= 0: pid=70703: Wed Nov 20 15:18:08 2024 00:10:22.075 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:10:22.075 slat (usec): min=11, max=6332, avg=113.30, stdev=554.63 00:10:22.075 clat (usec): min=8354, max=26206, avg=15207.51, stdev=4005.85 00:10:22.075 lat (usec): min=8387, max=26254, avg=15320.80, stdev=4055.43 00:10:22.075 clat percentiles (usec): 00:10:22.075 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10814], 00:10:22.075 | 30.00th=[11338], 40.00th=[14222], 50.00th=[16319], 60.00th=[16909], 00:10:22.075 | 70.00th=[17957], 80.00th=[19006], 90.00th=[19792], 95.00th=[21365], 00:10:22.075 | 99.00th=[22414], 99.50th=[23200], 99.90th=[25035], 99.95th=[25297], 00:10:22.075 | 99.99th=[26084] 00:10:22.075 write: IOPS=4543, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1004msec); 0 zone resets 00:10:22.075 slat (usec): min=17, max=4941, avg=104.64, stdev=435.33 00:10:22.075 clat (usec): min=2966, max=23775, avg=14096.46, stdev=3309.76 00:10:22.075 lat (usec): min=4502, max=23807, avg=14201.10, stdev=3330.53 00:10:22.075 clat percentiles (usec): 00:10:22.075 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10552], 00:10:22.075 | 30.00th=[12518], 40.00th=[13698], 50.00th=[14222], 60.00th=[15401], 00:10:22.075 | 70.00th=[16188], 80.00th=[16909], 90.00th=[17957], 95.00th=[19006], 00:10:22.075 | 99.00th=[20579], 99.50th=[20841], 99.90th=[22152], 99.95th=[22152], 00:10:22.075 | 99.99th=[23725] 00:10:22.075 bw ( KiB/s): min=14962, max=20521, per=31.02%, avg=17741.50, stdev=3930.81, samples=2 00:10:22.075 iops : min= 3740, max= 5130, avg=4435.00, stdev=982.88, samples=2 00:10:22.075 lat (msec) : 4=0.01%, 10=13.85%, 20=80.48%, 50=5.66% 00:10:22.075 cpu : usr=6.78%, sys=17.85%, ctx=409, majf=0, minf=6 00:10:22.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:22.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.075 issued rwts: total=4096,4562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.075 job3: (groupid=0, jobs=1): err= 0: pid=70704: Wed Nov 20 15:18:08 2024 00:10:22.075 read: 
IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:22.075 slat (usec): min=10, max=6024, avg=161.52, stdev=664.92 00:10:22.075 clat (usec): min=13276, max=35969, avg=21079.08, stdev=4296.14 00:10:22.075 lat (usec): min=13480, max=36001, avg=21240.61, stdev=4288.34 00:10:22.075 clat percentiles (usec): 00:10:22.075 | 1.00th=[13829], 5.00th=[15926], 10.00th=[16712], 20.00th=[17957], 00:10:22.075 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19530], 60.00th=[20317], 00:10:22.075 | 70.00th=[22676], 80.00th=[25822], 90.00th=[27395], 95.00th=[28705], 00:10:22.075 | 99.00th=[31851], 99.50th=[33817], 99.90th=[35914], 99.95th=[35914], 00:10:22.075 | 99.99th=[35914] 00:10:22.075 write: IOPS=3394, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1004msec); 0 zone resets 00:10:22.075 slat (usec): min=9, max=7535, avg=135.61, stdev=580.16 00:10:22.075 clat (usec): min=2415, max=29054, avg=18128.25, stdev=4703.17 00:10:22.076 lat (usec): min=4824, max=30218, avg=18263.86, stdev=4706.83 00:10:22.076 clat percentiles (usec): 00:10:22.076 | 1.00th=[10945], 5.00th=[12256], 10.00th=[13042], 20.00th=[14091], 00:10:22.076 | 30.00th=[14353], 40.00th=[15401], 50.00th=[16909], 60.00th=[19006], 00:10:22.076 | 70.00th=[21890], 80.00th=[23462], 90.00th=[24773], 95.00th=[25297], 00:10:22.076 | 99.00th=[27657], 99.50th=[28181], 99.90th=[28967], 99.95th=[28967], 00:10:22.076 | 99.99th=[28967] 00:10:22.076 bw ( KiB/s): min=12288, max=13924, per=22.91%, avg=13106.00, stdev=1156.83, samples=2 00:10:22.076 iops : min= 3072, max= 3481, avg=3276.50, stdev=289.21, samples=2 00:10:22.076 lat (msec) : 4=0.02%, 10=0.49%, 20=59.72%, 50=39.77% 00:10:22.076 cpu : usr=5.38%, sys=12.26%, ctx=389, majf=0, minf=10 00:10:22.076 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:22.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.076 issued rwts: total=3072,3408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.076 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.076 00:10:22.076 Run status group 0 (all jobs): 00:10:22.076 READ: bw=51.4MiB/s (53.9MB/s), 11.6MiB/s-15.9MiB/s (12.1MB/s-16.7MB/s), io=51.6MiB (54.1MB), run=1003-1005msec 00:10:22.076 WRITE: bw=55.9MiB/s (58.6MB/s), 11.9MiB/s-17.7MiB/s (12.5MB/s-18.6MB/s), io=56.1MiB (58.9MB), run=1003-1005msec 00:10:22.076 00:10:22.076 Disk stats (read/write): 00:10:22.076 nvme0n1: ios=2514/2560, merge=0/0, ticks=14326/10198, in_queue=24524, util=89.07% 00:10:22.076 nvme0n2: ios=2595/2688, merge=0/0, ticks=13815/10838, in_queue=24653, util=87.85% 00:10:22.076 nvme0n3: ios=3584/4088, merge=0/0, ticks=16111/14506, in_queue=30617, util=89.14% 00:10:22.076 nvme0n4: ios=2560/3058, merge=0/0, ticks=12757/10820, in_queue=23577, util=88.77% 00:10:22.076 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:22.076 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70718 00:10:22.076 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:22.076 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:22.076 [global] 00:10:22.076 thread=1 00:10:22.076 invalidate=1 00:10:22.076 rw=read 00:10:22.076 time_based=1 00:10:22.076 runtime=10 00:10:22.076 ioengine=libaio 00:10:22.076 direct=1 00:10:22.076 bs=4096 00:10:22.076 iodepth=1 00:10:22.076 
norandommap=1 00:10:22.076 numjobs=1 00:10:22.076 00:10:22.076 [job0] 00:10:22.076 filename=/dev/nvme0n1 00:10:22.076 [job1] 00:10:22.076 filename=/dev/nvme0n2 00:10:22.076 [job2] 00:10:22.076 filename=/dev/nvme0n3 00:10:22.076 [job3] 00:10:22.076 filename=/dev/nvme0n4 00:10:22.076 Could not set queue depth (nvme0n1) 00:10:22.076 Could not set queue depth (nvme0n2) 00:10:22.076 Could not set queue depth (nvme0n3) 00:10:22.076 Could not set queue depth (nvme0n4) 00:10:22.335 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.335 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.335 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.335 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.335 fio-3.35 00:10:22.335 Starting 4 threads 00:10:24.869 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:25.128 fio: pid=70771, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:25.128 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=68505600, buflen=4096 00:10:25.128 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:25.387 fio: pid=70770, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:25.387 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46137344, buflen=4096 00:10:25.387 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.387 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:25.646 fio: pid=70768, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:25.646 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=18251776, buflen=4096 00:10:25.646 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.646 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:25.906 fio: pid=70769, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:25.906 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=61116416, buflen=4096 00:10:25.906 00:10:25.906 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70768: Wed Nov 20 15:18:12 2024 00:10:25.906 read: IOPS=6352, BW=24.8MiB/s (26.0MB/s)(81.4MiB/3281msec) 00:10:25.906 slat (usec): min=7, max=14167, avg=11.47, stdev=166.16 00:10:25.906 clat (usec): min=98, max=3069, avg=145.15, stdev=36.30 00:10:25.906 lat (usec): min=106, max=14343, avg=156.63, stdev=170.55 00:10:25.906 clat percentiles (usec): 00:10:25.906 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:10:25.906 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147], 00:10:25.906 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:10:25.906 | 99.00th=[ 182], 99.50th=[ 188], 
99.90th=[ 231], 99.95th=[ 515], 00:10:25.906 | 99.99th=[ 2442] 00:10:25.906 bw ( KiB/s): min=23683, max=26866, per=35.22%, avg=25405.33, stdev=1177.84, samples=6 00:10:25.906 iops : min= 5920, max= 6716, avg=6350.83, stdev=294.57, samples=6 00:10:25.906 lat (usec) : 100=0.01%, 250=99.89%, 500=0.04%, 750=0.03%, 1000=0.01% 00:10:25.906 lat (msec) : 2=0.01%, 4=0.01% 00:10:25.906 cpu : usr=1.13%, sys=5.37%, ctx=20847, majf=0, minf=1 00:10:25.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.906 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.906 issued rwts: total=20841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.906 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70769: Wed Nov 20 15:18:12 2024 00:10:25.906 read: IOPS=4221, BW=16.5MiB/s (17.3MB/s)(58.3MiB/3535msec) 00:10:25.906 slat (usec): min=7, max=11894, avg=14.63, stdev=198.02 00:10:25.906 clat (usec): min=89, max=2414, avg=221.32, stdev=75.21 00:10:25.906 lat (usec): min=97, max=12039, avg=235.95, stdev=212.15 00:10:25.906 clat percentiles (usec): 00:10:25.906 | 1.00th=[ 96], 5.00th=[ 100], 10.00th=[ 105], 20.00th=[ 131], 00:10:25.906 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:10:25.906 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:10:25.906 | 99.00th=[ 306], 99.50th=[ 408], 99.90th=[ 766], 99.95th=[ 881], 00:10:25.906 | 99.99th=[ 2343] 00:10:25.906 bw ( KiB/s): min=14491, max=16084, per=21.06%, avg=15195.17, stdev=557.19, samples=6 00:10:25.906 iops : min= 3622, max= 4021, avg=3798.50, stdev=139.52, samples=6 00:10:25.906 lat (usec) : 100=4.79%, 250=52.43%, 500=42.46%, 750=0.21%, 1000=0.06% 00:10:25.906 lat (msec) : 2=0.03%, 4=0.01% 00:10:25.906 cpu : usr=0.68%, sys=4.13%, ctx=14928, majf=0, minf=1 00:10:25.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.906 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.906 issued rwts: total=14922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.906 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70770: Wed Nov 20 15:18:12 2024 00:10:25.906 read: IOPS=3684, BW=14.4MiB/s (15.1MB/s)(44.0MiB/3057msec) 00:10:25.906 slat (usec): min=8, max=11434, avg=11.52, stdev=130.38 00:10:25.906 clat (usec): min=112, max=4422, avg=258.94, stdev=91.85 00:10:25.906 lat (usec): min=121, max=11621, avg=270.47, stdev=160.32 00:10:25.906 clat percentiles (usec): 00:10:25.906 | 1.00th=[ 149], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:10:25.906 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:10:25.906 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:10:25.906 | 99.00th=[ 367], 99.50th=[ 420], 99.90th=[ 1004], 99.95th=[ 2409], 00:10:25.906 | 99.99th=[ 4424] 00:10:25.906 bw ( KiB/s): min=14339, max=15497, per=20.53%, avg=14806.40, stdev=431.53, samples=5 00:10:25.906 iops : min= 3584, max= 3874, avg=3701.20, stdev=107.99, samples=5 00:10:25.906 lat (usec) : 250=36.25%, 500=63.45%, 750=0.11%, 1000=0.08% 00:10:25.906 lat (msec) : 2=0.04%, 4=0.05%, 10=0.02% 00:10:25.906 cpu 
: usr=0.43%, sys=3.37%, ctx=11275, majf=0, minf=2 00:10:25.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.906 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.906 issued rwts: total=11265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.906 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70771: Wed Nov 20 15:18:12 2024 00:10:25.906 read: IOPS=5893, BW=23.0MiB/s (24.1MB/s)(65.3MiB/2838msec) 00:10:25.906 slat (usec): min=8, max=145, avg= 9.59, stdev= 3.51 00:10:25.906 clat (usec): min=109, max=1812, avg=159.23, stdev=21.80 00:10:25.906 lat (usec): min=138, max=1822, avg=168.83, stdev=22.44 00:10:25.906 clat percentiles (usec): 00:10:25.906 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 149], 00:10:25.906 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:10:25.906 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 00:10:25.906 | 99.00th=[ 194], 99.50th=[ 198], 99.90th=[ 233], 99.95th=[ 302], 00:10:25.906 | 99.99th=[ 1680] 00:10:25.906 bw ( KiB/s): min=22970, max=24518, per=32.76%, avg=23632.00, stdev=561.98, samples=5 00:10:25.906 iops : min= 5742, max= 6129, avg=5907.80, stdev=140.45, samples=5 00:10:25.906 lat (usec) : 250=99.92%, 500=0.05%, 750=0.01% 00:10:25.906 lat (msec) : 2=0.01% 00:10:25.906 cpu : usr=1.02%, sys=4.97%, ctx=16727, majf=0, minf=2 00:10:25.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.906 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.906 issued rwts: total=16726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.906 00:10:25.906 Run status group 0 (all jobs): 00:10:25.906 READ: bw=70.4MiB/s (73.9MB/s), 14.4MiB/s-24.8MiB/s (15.1MB/s-26.0MB/s), io=249MiB (261MB), run=2838-3535msec 00:10:25.906 00:10:25.906 Disk stats (read/write): 00:10:25.906 nvme0n1: ios=19791/0, merge=0/0, ticks=2926/0, in_queue=2926, util=94.89% 00:10:25.906 nvme0n2: ios=13505/0, merge=0/0, ticks=3202/0, in_queue=3202, util=95.21% 00:10:25.906 nvme0n3: ios=10694/0, merge=0/0, ticks=2779/0, in_queue=2779, util=96.47% 00:10:25.906 nvme0n4: ios=15472/0, merge=0/0, ticks=2485/0, in_queue=2485, util=96.54% 00:10:25.906 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.906 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:26.165 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.165 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:26.424 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.424 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 
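[Note] The bdev_malloc_delete calls above and below hot-remove the malloc bdevs while the 10-second read jobs started by fio.sh@58 are still in flight; the "Operation not supported" io_u errors and the "nvmf hotplug test: fio failed as expected" message further down are the intended outcome of this hotplug test. The deletion loop continues below with Malloc5 and Malloc6. As a minimal standalone sketch of the same bdev lifecycle against a running SPDK target (the 64 MiB size and 512-byte block size are the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values bdevio.sh sets later in this log; this is an illustration, not the harness itself):

    # create a 64 MiB malloc bdev with 512-byte blocks, named Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # ... export it through a subsystem and start I/O against it ...
    # deleting it while I/O is outstanding is what makes the reads fail with
    # "Operation not supported", which this test treats as the expected result
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0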
00:10:26.682 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.682 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:26.942 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.942 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70718 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.200 nvmf hotplug test: fio failed as expected 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:27.200 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:27.458 15:18:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:27.458 rmmod nvme_tcp 00:10:27.458 rmmod nvme_fabrics 00:10:27.458 rmmod nvme_keyring 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 70230 ']' 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 70230 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 70230 ']' 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 70230 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.458 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70230 00:10:27.716 killing process with pid 70230 00:10:27.716 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.716 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.716 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70230' 00:10:27.716 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 70230 00:10:27.716 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 70230 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # 
ip link set nvmf_tgt_br2 nomaster 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:27.975 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:28.234 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:28.235 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:28.235 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.235 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.235 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.235 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:28.235 00:10:28.235 real 0m20.210s 00:10:28.235 user 1m14.821s 00:10:28.235 sys 0m10.059s 00:10:28.235 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.235 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.235 ************************************ 00:10:28.235 END TEST nvmf_fio_target 00:10:28.235 ************************************ 00:10:28.235 15:18:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:28.235 15:18:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.235 15:18:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.235 15:18:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:28.235 ************************************ 00:10:28.235 START TEST nvmf_bdevio 00:10:28.235 ************************************ 00:10:28.235 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:28.494 * Looking for test storage... 
00:10:28.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:28.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.494 --rc genhtml_branch_coverage=1 00:10:28.494 --rc genhtml_function_coverage=1 00:10:28.494 --rc genhtml_legend=1 00:10:28.494 --rc geninfo_all_blocks=1 00:10:28.494 --rc geninfo_unexecuted_blocks=1 00:10:28.494 00:10:28.494 ' 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:28.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.494 --rc genhtml_branch_coverage=1 00:10:28.494 --rc genhtml_function_coverage=1 00:10:28.494 --rc genhtml_legend=1 00:10:28.494 --rc geninfo_all_blocks=1 00:10:28.494 --rc geninfo_unexecuted_blocks=1 00:10:28.494 00:10:28.494 ' 00:10:28.494 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:28.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.494 --rc genhtml_branch_coverage=1 00:10:28.494 --rc genhtml_function_coverage=1 00:10:28.494 --rc genhtml_legend=1 00:10:28.494 --rc geninfo_all_blocks=1 00:10:28.494 --rc geninfo_unexecuted_blocks=1 00:10:28.495 00:10:28.495 ' 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:28.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.495 --rc genhtml_branch_coverage=1 00:10:28.495 --rc genhtml_function_coverage=1 00:10:28.495 --rc genhtml_legend=1 00:10:28.495 --rc geninfo_all_blocks=1 00:10:28.495 --rc geninfo_unexecuted_blocks=1 00:10:28.495 00:10:28.495 ' 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.495 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:28.495 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:28.496 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.496 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:28.496 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:28.496 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:28.496 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:28.496 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:28.496 Cannot find device "nvmf_init_br" 00:10:28.496 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:28.496 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:28.496 Cannot find device "nvmf_init_br2" 00:10:28.496 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:28.496 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:28.496 Cannot find device "nvmf_tgt_br" 00:10:28.496 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:28.496 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:28.755 Cannot find device "nvmf_tgt_br2" 00:10:28.755 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:28.755 15:18:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:28.755 Cannot find device "nvmf_init_br" 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:28.755 Cannot find device "nvmf_init_br2" 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:28.755 Cannot find device "nvmf_tgt_br" 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:28.755 Cannot find device "nvmf_tgt_br2" 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:28.755 Cannot find device "nvmf_br" 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:28.755 Cannot find device "nvmf_init_if" 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:28.755 Cannot find device "nvmf_init_if2" 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:28.755 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:28.755 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:28.755 
15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:28.755 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:29.014 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:29.015 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:29.015 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:10:29.015 00:10:29.015 --- 10.0.0.3 ping statistics --- 00:10:29.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.015 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:29.015 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:29.015 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:10:29.015 00:10:29.015 --- 10.0.0.4 ping statistics --- 00:10:29.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.015 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:29.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:29.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:10:29.015 00:10:29.015 --- 10.0.0.1 ping statistics --- 00:10:29.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.015 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:29.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:29.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:10:29.015 00:10:29.015 --- 10.0.0.2 ping statistics --- 00:10:29.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.015 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=71144 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 71144 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 71144 ']' 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.015 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:29.274 [2024-11-20 15:18:15.529049] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
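The nvmf_veth_init sequence traced above builds a bridged veth topology: two initiator links on the host (10.0.0.1/24 and 10.0.0.2/24) and two target links inside the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), all joined through the nvmf_br bridge, with iptables ACCEPT rules on port 4420 tagged SPDK_NVMF so teardown can strip them later. A minimal standalone sketch of the same bringup, showing one link pair per side with the names and addresses from the trace (assumes iproute2 and iptables are available):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # any SPDK_NVMF-prefixed comment works; cleanup greps it out of iptables-save
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:init_if 4420'
    ping -c 1 10.0.0.3    # host side reaches the namespaced target

The four pings in the trace check both directions across the bridge (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) before any NVMe/TCP traffic is attempted.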
00:10:29.274 [2024-11-20 15:18:15.529140] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.274 [2024-11-20 15:18:15.685262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.533 [2024-11-20 15:18:15.767875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.533 [2024-11-20 15:18:15.767929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.533 [2024-11-20 15:18:15.767940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.533 [2024-11-20 15:18:15.767948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.533 [2024-11-20 15:18:15.767955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.533 [2024-11-20 15:18:15.769558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:29.533 [2024-11-20 15:18:15.769764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:29.533 [2024-11-20 15:18:15.769885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:29.533 [2024-11-20 15:18:15.770296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.101 [2024-11-20 15:18:16.502420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.101 Malloc0 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
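With the data path verified, nvmfappstart launches the target inside the namespace so its TCP listener binds the namespaced interfaces. The core mask -m 0x78 is binary 1111000, selecting cores 3 through 6, which matches the four reactors reported above, and -e 0xFFFF enables every tracepoint group, hence the spdk_trace notices. A sketch of the launch using the path from the trace, with the readiness check reduced to a hypothetical one-liner (the harness's waitforlisten does its own polling of /var/tmp/spdk.sock):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # -i 0 is the shared-memory instance id, matching 'spdk_trace -s nvmf -i 0' above.
    # Hypothetical readiness probe: retry an innocuous RPC until the socket answers.
    until scripts/rpc.py -t 1 rpc_get_methods > /dev/null 2>&1; do sleep 0.1; done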
00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.101 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.102 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.102 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.361 [2024-11-20 15:18:16.593387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:30.361 { 00:10:30.361 "params": { 00:10:30.361 "name": "Nvme$subsystem", 00:10:30.361 "trtype": "$TEST_TRANSPORT", 00:10:30.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:30.361 "adrfam": "ipv4", 00:10:30.361 "trsvcid": "$NVMF_PORT", 00:10:30.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:30.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:30.361 "hdgst": ${hdgst:-false}, 00:10:30.361 "ddgst": ${ddgst:-false} 00:10:30.361 }, 00:10:30.361 "method": "bdev_nvme_attach_controller" 00:10:30.361 } 00:10:30.361 EOF 00:10:30.361 )") 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:30.361 15:18:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:30.361 "params": { 00:10:30.361 "name": "Nvme1", 00:10:30.361 "trtype": "tcp", 00:10:30.361 "traddr": "10.0.0.3", 00:10:30.361 "adrfam": "ipv4", 00:10:30.361 "trsvcid": "4420", 00:10:30.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:30.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:30.361 "hdgst": false, 00:10:30.361 "ddgst": false 00:10:30.361 }, 00:10:30.361 "method": "bdev_nvme_attach_controller" 00:10:30.361 }' 00:10:30.361 [2024-11-20 15:18:16.654668] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
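The provisioning steps issued through rpc_cmd above map onto scripts/rpc.py calls against the default socket; a sketch mirroring the arguments in the trace (transport with 8192-byte in-capsule data, a 64 MiB malloc bdev, subsystem cnode1, namespace, and the 10.0.0.3:4420 listener):

    cd /home/vagrant/spdk_repo/spdk
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420

gen_nvmf_target_json then renders the bdev_nvme_attach_controller fragment printed above, and --json /dev/fd/62 hands bdevio that rendered config through process substitution, so the test app connects to 10.0.0.3:4420 as host nqn.2016-06.io.spdk:host1 with header and data digests disabled.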
00:10:30.361 [2024-11-20 15:18:16.654908] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71198 ] 00:10:30.361 [2024-11-20 15:18:16.804278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.619 [2024-11-20 15:18:16.890740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.619 [2024-11-20 15:18:16.890764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.619 [2024-11-20 15:18:16.890765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.887 I/O targets: 00:10:30.887 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:30.887 00:10:30.887 00:10:30.887 CUnit - A unit testing framework for C - Version 2.1-3 00:10:30.887 http://cunit.sourceforge.net/ 00:10:30.887 00:10:30.887 00:10:30.887 Suite: bdevio tests on: Nvme1n1 00:10:30.887 Test: blockdev write read block ...passed 00:10:30.887 Test: blockdev write zeroes read block ...passed 00:10:30.887 Test: blockdev write zeroes read no split ...passed 00:10:30.887 Test: blockdev write zeroes read split ...passed 00:10:30.887 Test: blockdev write zeroes read split partial ...passed 00:10:30.887 Test: blockdev reset ...[2024-11-20 15:18:17.233542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:30.887 [2024-11-20 15:18:17.233832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c86f50 (9): Bad file descriptor 00:10:30.887 [2024-11-20 15:18:17.250667] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:30.887 passed 00:10:30.887 Test: blockdev write read 8 blocks ...passed 00:10:30.887 Test: blockdev write read size > 128k ...passed 00:10:30.887 Test: blockdev write read invalid size ...passed 00:10:30.887 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:30.887 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:30.887 Test: blockdev write read max offset ...passed 00:10:31.156 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.156 Test: blockdev writev readv 8 blocks ...passed 00:10:31.156 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.156 Test: blockdev writev readv block ...passed 00:10:31.156 Test: blockdev writev readv size > 128k ...passed 00:10:31.156 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.156 Test: blockdev comparev and writev ...[2024-11-20 15:18:17.425974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.156 [2024-11-20 15:18:17.426180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:31.156 [2024-11-20 15:18:17.426217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.156 [2024-11-20 15:18:17.426228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:31.156 [2024-11-20 15:18:17.426483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.156 [2024-11-20 15:18:17.426496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:31.156 [2024-11-20 15:18:17.426510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.156 [2024-11-20 15:18:17.426519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:31.156 [2024-11-20 15:18:17.426748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.156 [2024-11-20 15:18:17.426760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:31.156 [2024-11-20 15:18:17.426773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.156 [2024-11-20 15:18:17.426782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:31.156 [2024-11-20 15:18:17.426994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.156 [2024-11-20 15:18:17.427005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:31.156 [2024-11-20 15:18:17.427018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.156 [2024-11-20 15:18:17.427026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:31.156 passed 00:10:31.156 Test: blockdev nvme passthru rw ...passed 00:10:31.156 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:18:17.510071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:31.156 [2024-11-20 15:18:17.510119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:31.156 [2024-11-20 15:18:17.510224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:31.156 [2024-11-20 15:18:17.510238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:31.156 [2024-11-20 15:18:17.510321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:31.156 [2024-11-20 15:18:17.510332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:31.156 passed 00:10:31.156 Test: blockdev nvme admin passthru ...[2024-11-20 15:18:17.510410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:31.156 [2024-11-20 15:18:17.510421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:31.156 passed 00:10:31.156 Test: blockdev copy ...passed 00:10:31.156 00:10:31.156 Run Summary: Type Total Ran Passed Failed Inactive 00:10:31.156 suites 1 1 n/a 0 0 00:10:31.156 tests 23 23 23 0 0 00:10:31.156 asserts 152 152 152 0 n/a 00:10:31.156 00:10:31.156 Elapsed time = 0.913 seconds 00:10:31.416 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.416 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.416 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.416 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.416 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:31.416 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:31.416 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.416 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:31.416 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.416 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:31.416 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.416 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.416 rmmod nvme_tcp 00:10:31.416 rmmod nvme_fabrics 00:10:31.675 rmmod nvme_keyring 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
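After the suite passes, nvmftestfini's cleanup unloads the kernel initiator stack with errors tolerated, since nvme-tcp can stay referenced for a moment after the controller detaches; the rmmod lines above are its verbose output. A sketch of the retry pattern (the harness's exact loop bounds and error handling live in nvmf/common.sh; the backoff interval here is illustrative):

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 0.2    # assumption: illustrative backoff, not the harness's value
    done
    modprobe -v -r nvme-fabrics
    set -e

Immediately below, killprocess stops nvmf_tgt via its stored pid, and iptr restores iptables from iptables-save output filtered through grep -v SPDK_NVMF, removing exactly the comment-tagged rules added during setup.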
00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 71144 ']' 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 71144 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 71144 ']' 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 71144 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71144 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:31.675 killing process with pid 71144 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71144' 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 71144 00:10:31.675 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 71144 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:31.934 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:32.194 00:10:32.194 real 0m4.008s 00:10:32.194 user 0m12.412s 00:10:32.194 sys 0m1.313s 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.194 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.194 ************************************ 00:10:32.194 END TEST nvmf_bdevio 00:10:32.194 ************************************ 00:10:32.453 15:18:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:32.453 00:10:32.453 real 3m42.344s 00:10:32.453 user 11m8.144s 00:10:32.453 sys 1m20.444s 00:10:32.453 15:18:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.453 15:18:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:32.453 ************************************ 00:10:32.453 END TEST nvmf_target_core 00:10:32.453 ************************************ 00:10:32.453 15:18:18 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:32.453 15:18:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.453 15:18:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.453 15:18:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:32.453 ************************************ 00:10:32.453 START TEST nvmf_target_extra 00:10:32.453 ************************************ 00:10:32.453 15:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:32.453 * Looking for test storage... 
00:10:32.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:32.453 15:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:32.453 15:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:32.453 15:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:32.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.714 --rc genhtml_branch_coverage=1 00:10:32.714 --rc genhtml_function_coverage=1 00:10:32.714 --rc genhtml_legend=1 00:10:32.714 --rc geninfo_all_blocks=1 00:10:32.714 --rc geninfo_unexecuted_blocks=1 00:10:32.714 00:10:32.714 ' 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:32.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.714 --rc genhtml_branch_coverage=1 00:10:32.714 --rc genhtml_function_coverage=1 00:10:32.714 --rc genhtml_legend=1 00:10:32.714 --rc geninfo_all_blocks=1 00:10:32.714 --rc geninfo_unexecuted_blocks=1 00:10:32.714 00:10:32.714 ' 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:32.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.714 --rc genhtml_branch_coverage=1 00:10:32.714 --rc genhtml_function_coverage=1 00:10:32.714 --rc genhtml_legend=1 00:10:32.714 --rc geninfo_all_blocks=1 00:10:32.714 --rc geninfo_unexecuted_blocks=1 00:10:32.714 00:10:32.714 ' 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:32.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.714 --rc genhtml_branch_coverage=1 00:10:32.714 --rc genhtml_function_coverage=1 00:10:32.714 --rc genhtml_legend=1 00:10:32.714 --rc geninfo_all_blocks=1 00:10:32.714 --rc geninfo_unexecuted_blocks=1 00:10:32.714 00:10:32.714 ' 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:32.714 15:18:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.714 15:18:19 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.714 15:18:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.715 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:32.715 ************************************ 00:10:32.715 START TEST nvmf_example 00:10:32.715 ************************************ 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:32.715 * Looking for test storage... 
00:10:32.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:32.715 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:32.974 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:32.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.975 --rc genhtml_branch_coverage=1 00:10:32.975 --rc genhtml_function_coverage=1 00:10:32.975 --rc genhtml_legend=1 00:10:32.975 --rc geninfo_all_blocks=1 00:10:32.975 --rc geninfo_unexecuted_blocks=1 00:10:32.975 00:10:32.975 ' 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:32.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.975 --rc genhtml_branch_coverage=1 00:10:32.975 --rc genhtml_function_coverage=1 00:10:32.975 --rc genhtml_legend=1 00:10:32.975 --rc geninfo_all_blocks=1 00:10:32.975 --rc geninfo_unexecuted_blocks=1 00:10:32.975 00:10:32.975 ' 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:32.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.975 --rc genhtml_branch_coverage=1 00:10:32.975 --rc genhtml_function_coverage=1 00:10:32.975 --rc genhtml_legend=1 00:10:32.975 --rc geninfo_all_blocks=1 00:10:32.975 --rc geninfo_unexecuted_blocks=1 00:10:32.975 00:10:32.975 ' 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:32.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.975 --rc genhtml_branch_coverage=1 00:10:32.975 --rc genhtml_function_coverage=1 00:10:32.975 --rc genhtml_legend=1 00:10:32.975 --rc geninfo_all_blocks=1 00:10:32.975 --rc geninfo_unexecuted_blocks=1 00:10:32.975 00:10:32.975 ' 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:32.975 15:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.975 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:32.975 15:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.975 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:32.976 Cannot find device "nvmf_init_br" 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:32.976 Cannot find device "nvmf_init_br2" 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:32.976 Cannot find device "nvmf_tgt_br" 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:32.976 Cannot find device "nvmf_tgt_br2" 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:32.976 Cannot find device "nvmf_init_br" 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:10:32.976 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:33.235 Cannot find device "nvmf_init_br2" 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:33.235 Cannot find device "nvmf_tgt_br" 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:33.235 Cannot find device "nvmf_tgt_br2" 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:33.235 Cannot find device "nvmf_br" 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:33.235 Cannot find 
device "nvmf_init_if" 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:33.235 Cannot find device "nvmf_init_if2" 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:33.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:33.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:33.235 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:33.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:33.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:10:33.495 00:10:33.495 --- 10.0.0.3 ping statistics --- 00:10:33.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.495 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:33.495 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:33.495 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:10:33.495 00:10:33.495 --- 10.0.0.4 ping statistics --- 00:10:33.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.495 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:33.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:33.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:10:33.495 00:10:33.495 --- 10.0.0.1 ping statistics --- 00:10:33.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.495 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:33.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:10:33.495 00:10:33.495 --- 10.0.0.2 ping statistics --- 00:10:33.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.495 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=71502 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:33.495 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 71502 00:10:33.496 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 71502 ']' 00:10:33.496 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.496 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.496 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.496 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.496 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.432 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.432 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:34.432 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:34.432 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:34.432 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.692 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:34.692 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.692 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.692 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.692 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:34.692 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.692 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.692 15:18:21 
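[editor's note] Condensing the network plumbing from nvmf_veth_init earlier in the trace into one sketch: the target side of each veth pair moves into a namespace, the initiator side stays in the root namespace, and a bridge joins the peer ends; every iptables rule is tagged with an SPDK_NVMF comment so teardown can strip them wholesale. Interface names, addresses, and rules mirror the trace (one of the two pairs per side shown); only the wrapper-function framing is mine:

# Targets live in their own namespace; initiators stay in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator 10.0.0.1 talks to target 10.0.0.3 across the bridge.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# ipts tags each rule so 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
# (the iptr teardown near the end of the test) can drop them all at once.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Quick reachability check before starting the target, as in the trace.
ping -c 1 10.0.0.3

The "Cannot find device" lines earlier are the same teardown run defensively before setup; each cleanup command is allowed to fail (the "true" entries in the trace), which makes the whole sequence idempotent.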
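[editor's note] The rpc_cmd sequence above provisions the freshly started target. Spelled out against scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it, talking to the default /var/tmp/spdk.sock socket); flags are verbatim from the trace, the glosses in the comments are my reading:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport; -u sets the in-capsule data size
$rpc bdev_malloc_create 64 512                 # 64 MiB RAM-backed bdev, 512 B blocks -> prints "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                   # -a: allow any host, -s: serial number
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420                 # listen inside the namespace on 10.0.0.3:4420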
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:10:34.692 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:46.945 Initializing NVMe Controllers 00:10:46.945 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:46.945 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:46.945 Initialization complete. Launching workers. 00:10:46.945 ======================================================== 00:10:46.945 Latency(us) 00:10:46.945 Device Information : IOPS MiB/s Average min max 00:10:46.945 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17696.96 69.13 3616.03 654.43 21583.97 00:10:46.945 ======================================================== 00:10:46.945 Total : 17696.96 69.13 3616.03 654.43 21583.97 00:10:46.945 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.945 rmmod nvme_tcp 00:10:46.945 rmmod nvme_fabrics 00:10:46.945 rmmod nvme_keyring 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 71502 ']' 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 71502 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 71502 ']' 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 71502 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71502 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:46.945 15:18:31 
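[editor's note] A quick consistency check on the result table above: with -o 4096 each IO moves 4 KiB, so the MiB/s column should equal IOPS x 4096 / 2^20:

# Sanity-check the throughput column from the IOPS column
awk 'BEGIN { iops = 17696.96; printf "%.2f MiB/s\n", iops * 4096 / (1024 * 1024) }'
# -> 69.13 MiB/s, matching the table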
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:46.945 killing process with pid 71502 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71502' 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 71502 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 71502 00:10:46.945 nvmf threads initialize successfully 00:10:46.945 bdev subsystem init successfully 00:10:46.945 created a nvmf target service 00:10:46.945 create targets's poll groups done 00:10:46.945 all subsystems of target started 00:10:46.945 nvmf target is running 00:10:46.945 all subsystems of target stopped 00:10:46.945 destroy targets's poll groups done 00:10:46.945 destroyed the nvmf target service 00:10:46.945 bdev subsystem finish successfully 00:10:46.945 nvmf threads destroy successfully 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:46.945 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:46.946 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:46.946 00:10:46.946 real 0m13.077s 00:10:46.946 user 0m44.345s 00:10:46.946 sys 0m2.869s 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.946 ************************************ 00:10:46.946 END TEST nvmf_example 00:10:46.946 ************************************ 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.946 ************************************ 00:10:46.946 START TEST nvmf_filesystem 00:10:46.946 ************************************ 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:46.946 * Looking for test storage... 
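[editor's note] The START TEST / END TEST banners and the real/user/sys block above come from the run_test and timing helpers in test/common/autotest_common.sh. A deliberately simplified sketch of the banner part (the real helper also manages timing records and xtrace state):

run_test() {
    local name=$1; shift
    printf '%s\n' '************************************' "START TEST $name" '************************************'
    time "$@"   # the timed body is why real/user/sys print before the END banner
    printf '%s\n' '************************************' "END TEST $name" '************************************'
}
# Usage, as invoked above:
# run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp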
00:10:46.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.946 --rc genhtml_branch_coverage=1 00:10:46.946 --rc genhtml_function_coverage=1 00:10:46.946 --rc genhtml_legend=1 00:10:46.946 --rc geninfo_all_blocks=1 00:10:46.946 --rc geninfo_unexecuted_blocks=1 00:10:46.946 00:10:46.946 ' 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.946 --rc genhtml_branch_coverage=1 00:10:46.946 --rc genhtml_function_coverage=1 00:10:46.946 --rc genhtml_legend=1 00:10:46.946 --rc geninfo_all_blocks=1 00:10:46.946 --rc geninfo_unexecuted_blocks=1 00:10:46.946 00:10:46.946 ' 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.946 --rc genhtml_branch_coverage=1 00:10:46.946 --rc genhtml_function_coverage=1 00:10:46.946 --rc genhtml_legend=1 00:10:46.946 --rc geninfo_all_blocks=1 00:10:46.946 --rc geninfo_unexecuted_blocks=1 00:10:46.946 00:10:46.946 ' 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.946 --rc genhtml_branch_coverage=1 00:10:46.946 --rc genhtml_function_coverage=1 00:10:46.946 --rc genhtml_legend=1 00:10:46.946 --rc geninfo_all_blocks=1 00:10:46.946 --rc geninfo_unexecuted_blocks=1 00:10:46.946 00:10:46.946 ' 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:46.946 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:46.947 15:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:46.947 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:46.948 #define SPDK_CONFIG_H 00:10:46.948 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:46.948 #define SPDK_CONFIG_APPS 1 00:10:46.948 #define SPDK_CONFIG_ARCH 
native 00:10:46.948 #undef SPDK_CONFIG_ASAN 00:10:46.948 #define SPDK_CONFIG_AVAHI 1 00:10:46.948 #undef SPDK_CONFIG_CET 00:10:46.948 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:46.948 #define SPDK_CONFIG_COVERAGE 1 00:10:46.948 #define SPDK_CONFIG_CROSS_PREFIX 00:10:46.948 #undef SPDK_CONFIG_CRYPTO 00:10:46.948 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:46.948 #undef SPDK_CONFIG_CUSTOMOCF 00:10:46.948 #undef SPDK_CONFIG_DAOS 00:10:46.948 #define SPDK_CONFIG_DAOS_DIR 00:10:46.948 #define SPDK_CONFIG_DEBUG 1 00:10:46.948 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:46.948 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:46.948 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:46.948 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:46.948 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:46.948 #undef SPDK_CONFIG_DPDK_UADK 00:10:46.948 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:46.948 #define SPDK_CONFIG_EXAMPLES 1 00:10:46.948 #undef SPDK_CONFIG_FC 00:10:46.948 #define SPDK_CONFIG_FC_PATH 00:10:46.948 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:46.948 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:46.948 #define SPDK_CONFIG_FSDEV 1 00:10:46.948 #undef SPDK_CONFIG_FUSE 00:10:46.948 #undef SPDK_CONFIG_FUZZER 00:10:46.948 #define SPDK_CONFIG_FUZZER_LIB 00:10:46.948 #define SPDK_CONFIG_GOLANG 1 00:10:46.948 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:46.948 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:46.948 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:46.948 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:46.948 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:46.948 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:46.948 #undef SPDK_CONFIG_HAVE_LZ4 00:10:46.948 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:46.948 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:46.948 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:46.948 #define SPDK_CONFIG_IDXD 1 00:10:46.948 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:46.948 #undef SPDK_CONFIG_IPSEC_MB 00:10:46.948 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:46.948 #define SPDK_CONFIG_ISAL 1 00:10:46.948 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:46.948 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:46.948 #define SPDK_CONFIG_LIBDIR 00:10:46.948 #undef SPDK_CONFIG_LTO 00:10:46.948 #define SPDK_CONFIG_MAX_LCORES 128 00:10:46.948 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:46.948 #define SPDK_CONFIG_NVME_CUSE 1 00:10:46.948 #undef SPDK_CONFIG_OCF 00:10:46.948 #define SPDK_CONFIG_OCF_PATH 00:10:46.948 #define SPDK_CONFIG_OPENSSL_PATH 00:10:46.948 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:46.948 #define SPDK_CONFIG_PGO_DIR 00:10:46.948 #undef SPDK_CONFIG_PGO_USE 00:10:46.948 #define SPDK_CONFIG_PREFIX /usr/local 00:10:46.948 #undef SPDK_CONFIG_RAID5F 00:10:46.948 #undef SPDK_CONFIG_RBD 00:10:46.948 #define SPDK_CONFIG_RDMA 1 00:10:46.948 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:46.948 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:46.948 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:46.948 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:46.948 #define SPDK_CONFIG_SHARED 1 00:10:46.948 #undef SPDK_CONFIG_SMA 00:10:46.948 #define SPDK_CONFIG_TESTS 1 00:10:46.948 #undef SPDK_CONFIG_TSAN 00:10:46.948 #define SPDK_CONFIG_UBLK 1 00:10:46.948 #define SPDK_CONFIG_UBSAN 1 00:10:46.948 #undef SPDK_CONFIG_UNIT_TESTS 00:10:46.948 #undef SPDK_CONFIG_URING 00:10:46.948 #define SPDK_CONFIG_URING_PATH 00:10:46.948 #undef SPDK_CONFIG_URING_ZNS 00:10:46.948 #define SPDK_CONFIG_USDT 1 00:10:46.948 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:46.948 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:46.948 
#undef SPDK_CONFIG_VFIO_USER 00:10:46.948 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:46.948 #define SPDK_CONFIG_VHOST 1 00:10:46.948 #define SPDK_CONFIG_VIRTIO 1 00:10:46.948 #undef SPDK_CONFIG_VTUNE 00:10:46.948 #define SPDK_CONFIG_VTUNE_DIR 00:10:46.948 #define SPDK_CONFIG_WERROR 1 00:10:46.948 #define SPDK_CONFIG_WPDK_DIR 00:10:46.948 #undef SPDK_CONFIG_XNVME 00:10:46.948 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:10:46.948 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:46.949 
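The long run of ": 0" / ": 1" entries, each followed by an "export SPDK_TEST_*" line, above and below this point is bash's default-assignment idiom: each flag keeps whatever value autorun-spdk.conf already set and otherwise falls back to a default. A minimal sketch of what one traced pair corresponds to (the flag name here is illustrative):

    # Keep the caller's value if the flag is already set, else default
    # to 0; after expansion, xtrace prints this line as ": 0" or ": 1".
    : "${SPDK_TEST_EXAMPLE:=0}"
    export SPDK_TEST_EXAMPLE
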
15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:46.949 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:46.949 15:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:46.950 15:18:32 
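The ASAN/UBSAN/LSAN entries just traced wire up the sanitizer runtimes before any test binary starts: failure behavior is controlled entirely through environment options, and leak suppression is a plain text file. A hedged reconstruction of that wiring (the option strings are copied from the trace; the file handling is a sketch of the rm and echo commands shown):

    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'

    # Known-benign leaks are listed one per line in a suppression file.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo 'leak:libfuse3.so' > "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions="$asan_suppression_file"
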
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:46.950 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 71776 ]] 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 71776 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.EsFSKb 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.EsFSKb/tests/target /tmp/spdk.EsFSKb 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979774976 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5589356544 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256394240 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979774976 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5589356544 00:10:46.951 
15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266286080 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:46.951 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=92566896640 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7135883264 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:46.952 * Looking for test storage... 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13979774976 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:46.952 15:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.952 --rc genhtml_branch_coverage=1 00:10:46.952 --rc genhtml_function_coverage=1 00:10:46.952 --rc genhtml_legend=1 00:10:46.952 --rc geninfo_all_blocks=1 00:10:46.952 --rc geninfo_unexecuted_blocks=1 00:10:46.952 00:10:46.952 ' 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.952 --rc genhtml_branch_coverage=1 00:10:46.952 --rc genhtml_function_coverage=1 00:10:46.952 --rc genhtml_legend=1 00:10:46.952 --rc geninfo_all_blocks=1 00:10:46.952 --rc geninfo_unexecuted_blocks=1 00:10:46.952 00:10:46.952 ' 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.952 --rc genhtml_branch_coverage=1 00:10:46.952 --rc genhtml_function_coverage=1 00:10:46.952 --rc genhtml_legend=1 00:10:46.952 --rc geninfo_all_blocks=1 00:10:46.952 --rc geninfo_unexecuted_blocks=1 00:10:46.952 00:10:46.952 ' 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.952 --rc genhtml_branch_coverage=1 00:10:46.952 --rc genhtml_function_coverage=1 00:10:46.952 --rc genhtml_legend=1 00:10:46.952 --rc geninfo_all_blocks=1 00:10:46.952 --rc geninfo_unexecuted_blocks=1 00:10:46.952 00:10:46.952 ' 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.952 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.953 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.953 15:18:32 
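The "[: : integer expression expected" message captured above is a classic test(1) pitfall rather than a test failure: an empty variable reaches a numeric comparison, "[" sees no integer, prints the diagnostic, and returns false, so the script simply takes the negative branch and continues. An illustrative reproduction with hedged fixes (the variable name is a stand-in; the trace does not show which variable was empty):

    shm_id=''
    [ "$shm_id" -eq 1 ]          # -> [: : integer expression expected

    # Hardened forms: default empty/unset to 0, or guard for non-empty.
    [ "${shm_id:-0}" -eq 1 ]
    [[ -n $shm_id && $shm_id -eq 1 ]]
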
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:46.953 Cannot find device "nvmf_init_br" 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:46.953 Cannot find device "nvmf_init_br2" 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:46.953 Cannot find device "nvmf_tgt_br" 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.953 Cannot find device "nvmf_tgt_br2" 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:46.953 Cannot find device "nvmf_init_br" 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:46.953 Cannot find device "nvmf_init_br2" 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:46.953 Cannot find device "nvmf_tgt_br" 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:46.953 Cannot find device "nvmf_tgt_br2" 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:10:46.953 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:46.953 Cannot find device "nvmf_br" 00:10:46.953 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:10:46.953 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:46.953 Cannot find device "nvmf_init_if" 00:10:46.953 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:10:46.953 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:46.953 Cannot find device "nvmf_init_if2" 00:10:46.953 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:10:46.953 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:46.953 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:46.953 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:10:46.953 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.953 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:10:46.953 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:46.953 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:46.953 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:46.954 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:46.954 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.156 ms 00:10:46.954 00:10:46.954 --- 10.0.0.3 ping statistics --- 00:10:46.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.954 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:46.954 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:46.954 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:10:46.954 00:10:46.954 --- 10.0.0.4 ping statistics --- 00:10:46.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.954 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:46.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:46.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:10:46.954 00:10:46.954 --- 10.0.0.1 ping statistics --- 00:10:46.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.954 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:46.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:46.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:10:46.954 00:10:46.954 --- 10.0.0.2 ping statistics --- 00:10:46.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.954 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.954 ************************************ 00:10:46.954 START TEST nvmf_filesystem_no_in_capsule 00:10:46.954 ************************************ 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71966 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71966 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71966 ']' 00:10:46.954 15:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.954 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.954 [2024-11-20 15:18:33.411456] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:10:46.954 [2024-11-20 15:18:33.411583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.214 [2024-11-20 15:18:33.564336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.214 [2024-11-20 15:18:33.649560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.214 [2024-11-20 15:18:33.649898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.214 [2024-11-20 15:18:33.650036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.214 [2024-11-20 15:18:33.650088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.214 [2024-11-20 15:18:33.650115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
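The nvmf_veth_init sequence traced above builds SPDK's standard virtual TCP test topology. The failed link/netns deletions at the top are the expected cleanup of a previous run (each failure is followed by a traced true fallback). Then two initiator-side veth pairs stay in the root namespace (10.0.0.1 and 10.0.0.2) while the two target-side pairs are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4); the bridge ends of all four pairs are enslaved to nvmf_br, iptables ACCEPT rules open TCP port 4420, and the four pings confirm reachability in both directions before nvmf_tgt is launched inside the namespace via ip netns exec. A condensed sketch of the same setup with a single initiator/target pair, using only commands that appear in the trace (run as root; the real script creates two pairs on each side):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace -> target namespace sanity check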
00:10:47.214 [2024-11-20 15:18:33.651614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.214 [2024-11-20 15:18:33.651726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.214 [2024-11-20 15:18:33.651813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.214 [2024-11-20 15:18:33.651817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.150 [2024-11-20 15:18:34.420998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.150 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.409 Malloc1 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.409 15:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.409 [2024-11-20 15:18:34.662456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:48.409 { 00:10:48.409 "aliases": [ 00:10:48.409 "030a6247-014c-4946-8fd4-21b3f381616e" 00:10:48.409 ], 00:10:48.409 "assigned_rate_limits": { 00:10:48.409 "r_mbytes_per_sec": 0, 00:10:48.409 "rw_ios_per_sec": 0, 00:10:48.409 "rw_mbytes_per_sec": 0, 00:10:48.409 "w_mbytes_per_sec": 0 00:10:48.409 }, 00:10:48.409 "block_size": 512, 00:10:48.409 "claim_type": "exclusive_write", 00:10:48.409 "claimed": true, 00:10:48.409 "driver_specific": {}, 00:10:48.409 "memory_domains": [ 00:10:48.409 { 00:10:48.409 "dma_device_id": "system", 00:10:48.409 "dma_device_type": 1 00:10:48.409 }, 00:10:48.409 { 00:10:48.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.409 
"dma_device_type": 2 00:10:48.409 } 00:10:48.409 ], 00:10:48.409 "name": "Malloc1", 00:10:48.409 "num_blocks": 1048576, 00:10:48.409 "product_name": "Malloc disk", 00:10:48.409 "supported_io_types": { 00:10:48.409 "abort": true, 00:10:48.409 "compare": false, 00:10:48.409 "compare_and_write": false, 00:10:48.409 "copy": true, 00:10:48.409 "flush": true, 00:10:48.409 "get_zone_info": false, 00:10:48.409 "nvme_admin": false, 00:10:48.409 "nvme_io": false, 00:10:48.409 "nvme_io_md": false, 00:10:48.409 "nvme_iov_md": false, 00:10:48.409 "read": true, 00:10:48.409 "reset": true, 00:10:48.409 "seek_data": false, 00:10:48.409 "seek_hole": false, 00:10:48.409 "unmap": true, 00:10:48.409 "write": true, 00:10:48.409 "write_zeroes": true, 00:10:48.409 "zcopy": true, 00:10:48.409 "zone_append": false, 00:10:48.409 "zone_management": false 00:10:48.409 }, 00:10:48.409 "uuid": "030a6247-014c-4946-8fd4-21b3f381616e", 00:10:48.409 "zoned": false 00:10:48.409 } 00:10:48.409 ]' 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:48.409 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:48.668 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:48.668 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:48.668 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.668 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:48.668 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:50.571 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:50.571 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:50.571 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:50.571 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:50.572 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:50.830 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:50.830 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:51.765 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:51.765 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:51.765 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:51.765 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.765 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.024 ************************************ 00:10:52.024 START TEST filesystem_ext4 00:10:52.024 ************************************ 00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
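At this point the no_in_capsule target is fully wired: the TCP transport is created with -c 0 (an in-capsule data size of zero, so write data always travels in a separate host-to-controller transfer rather than inside the command capsule), a 512 MiB Malloc1 bdev (1048576 blocks of 512 bytes, matching the bdev_get_bdevs JSON above) is exposed as a namespace of nqn.2016-06.io.spdk:cnode1, the subsystem listens on 10.0.0.3:4420, and the initiator connects with nvme-cli, waits until lsblk reports the serial SPDKISFASTANDAWESOME, and carves a single GPT partition for the filesystem runs. A minimal standalone reproduction of the same bring-up, assuming scripts/rpc.py from the SPDK tree and an already running nvmf_tgt (the traced nvme connect also pins --hostnqn and --hostid to the VM's uuid, omitted here):

    # target side: RPCs against the running nvmf_tgt
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # initiator side: connect and partition the exported namespace
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

Each filesystem subtest that follows (ext4, btrfs, xfs) then formats the partition, mounts it at /mnt/device, touches and removes a file with syncs in between, unmounts, and verifies with kill -0 that the target process survived the I/O.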
00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:52.024 mke2fs 1.47.0 (5-Feb-2023) 00:10:52.024 Discarding device blocks: 0/522240 done 00:10:52.024 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:52.024 Filesystem UUID: 4de9a706-94cb-41cb-8244-e341b9239dd2 00:10:52.024 Superblock backups stored on blocks: 00:10:52.024 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:52.024 00:10:52.024 Allocating group tables: 0/64 done 00:10:52.024 Writing inode tables: 0/64 done 00:10:52.024 Creating journal (8192 blocks): done 00:10:52.024 Writing superblocks and filesystem accounting information: 0/64 done 00:10:52.024 00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:52.024 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:57.399 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:57.399 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:57.688 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:57.688 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:57.688 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:57.688 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:57.688 
15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71966 00:10:57.688 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:57.688 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:57.688 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:57.688 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:57.688 ************************************ 00:10:57.688 END TEST filesystem_ext4 00:10:57.688 ************************************ 00:10:57.688 00:10:57.688 real 0m5.689s 00:10:57.688 user 0m0.026s 00:10:57.688 sys 0m0.105s 00:10:57.688 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.688 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:57.688 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:57.688 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:57.688 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.688 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.688 ************************************ 00:10:57.688 START TEST filesystem_btrfs 00:10:57.688 ************************************ 00:10:57.688 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:57.688 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:57.688 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.688 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:57.688 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:57.688 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:57.688 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:57.688 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:57.688 15:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:57.688 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:57.688 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:57.962 btrfs-progs v6.8.1 00:10:57.962 See https://btrfs.readthedocs.io for more information. 00:10:57.962 00:10:57.962 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:57.962 NOTE: several default settings have changed in version 5.15, please make sure 00:10:57.962 this does not affect your deployments: 00:10:57.962 - DUP for metadata (-m dup) 00:10:57.962 - enabled no-holes (-O no-holes) 00:10:57.962 - enabled free-space-tree (-R free-space-tree) 00:10:57.962 00:10:57.962 Label: (null) 00:10:57.962 UUID: 076c1d36-4ca8-48f7-bee8-0fd1b75c29db 00:10:57.962 Node size: 16384 00:10:57.962 Sector size: 4096 (CPU page size: 4096) 00:10:57.962 Filesystem size: 510.00MiB 00:10:57.962 Block group profiles: 00:10:57.962 Data: single 8.00MiB 00:10:57.962 Metadata: DUP 32.00MiB 00:10:57.962 System: DUP 8.00MiB 00:10:57.962 SSD detected: yes 00:10:57.962 Zoned device: no 00:10:57.962 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:57.962 Checksum: crc32c 00:10:57.962 Number of devices: 1 00:10:57.962 Devices: 00:10:57.962 ID SIZE PATH 00:10:57.962 1 510.00MiB /dev/nvme0n1p1 00:10:57.962 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71966 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:57.962 
15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:57.962 ************************************ 00:10:57.962 END TEST filesystem_btrfs 00:10:57.962 ************************************ 00:10:57.962 00:10:57.962 real 0m0.338s 00:10:57.962 user 0m0.037s 00:10:57.962 sys 0m0.099s 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.962 ************************************ 00:10:57.962 START TEST filesystem_xfs 00:10:57.962 ************************************ 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:57.962 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:58.221 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:58.221 = sectsz=512 attr=2, projid32bit=1 00:10:58.221 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:58.221 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:58.221 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:58.221 = sunit=0 swidth=0 blks 00:10:58.221 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:58.221 log =internal log bsize=4096 blocks=16384, version=2 00:10:58.221 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:58.221 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:59.158 Discarding blocks...Done. 00:10:59.158 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:59.158 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.689 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.689 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:01.689 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.689 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:01.689 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:01.689 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71966 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:01.690 ************************************ 00:11:01.690 END TEST filesystem_xfs 00:11:01.690 ************************************ 00:11:01.690 00:11:01.690 real 0m3.199s 00:11:01.690 user 0m0.040s 00:11:01.690 sys 0m0.095s 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.690 15:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71966 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71966 ']' 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71966 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71966 00:11:01.690 killing process with pid 71966 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71966' 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 71966 00:11:01.690 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 71966 00:11:02.257 ************************************ 00:11:02.257 END TEST nvmf_filesystem_no_in_capsule 00:11:02.257 ************************************ 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:02.257 00:11:02.257 real 0m15.103s 00:11:02.257 user 0m56.695s 00:11:02.257 sys 0m3.521s 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 ************************************ 00:11:02.257 START TEST nvmf_filesystem_in_capsule 00:11:02.257 ************************************ 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=72338 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 72338 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 72338 ']' 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
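The in_capsule variant that starts here repeats the same ext4/btrfs/xfs pass against a fresh nvmf_tgt (pid 72338); the only target-side difference is the transport configuration, which now permits up to 4096 bytes of write data to be carried inside the NVMe/TCP command capsule itself instead of in a separate data transfer. Sketch of the one changed RPC, under the same assumptions as the sketch above:

    # -c sets the in-capsule data size: 0 in the previous run, 4096 here
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096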
00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.257 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 [2024-11-20 15:18:48.592167] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:11:02.257 [2024-11-20 15:18:48.592504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.517 [2024-11-20 15:18:48.747768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.517 [2024-11-20 15:18:48.832727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.517 [2024-11-20 15:18:48.833752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.517 [2024-11-20 15:18:48.833776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.517 [2024-11-20 15:18:48.833788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.517 [2024-11-20 15:18:48.833795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.517 [2024-11-20 15:18:48.835277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.517 [2024-11-20 15:18:48.835452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.517 [2024-11-20 15:18:48.835457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.517 [2024-11-20 15:18:48.835365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.086 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.086 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:03.086 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.086 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.086 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.347 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.347 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:03.347 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:03.347 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.347 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.347 [2024-11-20 15:18:49.632009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.347 15:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.347 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:03.347 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.347 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.606 Malloc1 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.606 [2024-11-20 15:18:49.877490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:03.606 15:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:03.606 { 00:11:03.606 "aliases": [ 00:11:03.606 "d78777cb-b7a8-4c29-a5e9-09f86598e886" 00:11:03.606 ], 00:11:03.606 "assigned_rate_limits": { 00:11:03.606 "r_mbytes_per_sec": 0, 00:11:03.606 "rw_ios_per_sec": 0, 00:11:03.606 "rw_mbytes_per_sec": 0, 00:11:03.606 "w_mbytes_per_sec": 0 00:11:03.606 }, 00:11:03.606 "block_size": 512, 00:11:03.606 "claim_type": "exclusive_write", 00:11:03.606 "claimed": true, 00:11:03.606 "driver_specific": {}, 00:11:03.606 "memory_domains": [ 00:11:03.606 { 00:11:03.606 "dma_device_id": "system", 00:11:03.606 "dma_device_type": 1 00:11:03.606 }, 00:11:03.606 { 00:11:03.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.606 "dma_device_type": 2 00:11:03.606 } 00:11:03.606 ], 00:11:03.606 "name": "Malloc1", 00:11:03.606 "num_blocks": 1048576, 00:11:03.606 "product_name": "Malloc disk", 00:11:03.606 "supported_io_types": { 00:11:03.606 "abort": true, 00:11:03.606 "compare": false, 00:11:03.606 "compare_and_write": false, 00:11:03.606 "copy": true, 00:11:03.606 "flush": true, 00:11:03.606 "get_zone_info": false, 00:11:03.606 "nvme_admin": false, 00:11:03.606 "nvme_io": false, 00:11:03.606 "nvme_io_md": false, 00:11:03.606 "nvme_iov_md": false, 00:11:03.606 "read": true, 00:11:03.606 "reset": true, 00:11:03.606 "seek_data": false, 00:11:03.606 "seek_hole": false, 00:11:03.606 "unmap": true, 00:11:03.606 "write": true, 00:11:03.606 "write_zeroes": true, 00:11:03.606 "zcopy": true, 00:11:03.606 "zone_append": false, 00:11:03.606 "zone_management": false 00:11:03.606 }, 00:11:03.606 "uuid": "d78777cb-b7a8-4c29-a5e9-09f86598e886", 00:11:03.606 "zoned": false 00:11:03.606 } 00:11:03.606 ]' 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:03.606 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:03.606 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:03.606 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:03.606 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:03.607 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:03.607 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:03.867 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.867 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:03.867 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.867 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:03.867 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:05.772 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:06.031 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:06.031 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:06.031 15:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:06.031 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.969 ************************************ 00:11:06.969 START TEST filesystem_in_capsule_ext4 00:11:06.969 ************************************ 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:06.969 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:06.969 mke2fs 1.47.0 (5-Feb-2023) 00:11:07.228 Discarding device blocks: 0/522240 done 00:11:07.228 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:07.228 Filesystem UUID: 5ccd0178-01a0-4ff6-9919-251ffd982877 00:11:07.228 Superblock backups stored on blocks: 00:11:07.228 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:07.228 00:11:07.228 Allocating group tables: 0/64 done 00:11:07.228 Writing inode tables: 
0/64 done 00:11:07.228 Creating journal (8192 blocks): done 00:11:07.228 Writing superblocks and filesystem accounting information: 0/64 done 00:11:07.228 00:11:07.228 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:07.228 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:12.523 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:12.523 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:12.523 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 72338 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:12.782 ************************************ 00:11:12.782 END TEST filesystem_in_capsule_ext4 00:11:12.782 ************************************ 00:11:12.782 00:11:12.782 real 0m5.673s 00:11:12.782 user 0m0.032s 00:11:12.782 sys 0m0.094s 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.782 
************************************ 00:11:12.782 START TEST filesystem_in_capsule_btrfs 00:11:12.782 ************************************ 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:12.782 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:12.783 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:12.783 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:12.783 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:12.783 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:12.783 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:12.783 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:12.783 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:13.041 btrfs-progs v6.8.1 00:11:13.041 See https://btrfs.readthedocs.io for more information. 00:11:13.041 00:11:13.041 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:13.041 NOTE: several default settings have changed in version 5.15, please make sure 00:11:13.041 this does not affect your deployments: 00:11:13.041 - DUP for metadata (-m dup) 00:11:13.041 - enabled no-holes (-O no-holes) 00:11:13.041 - enabled free-space-tree (-R free-space-tree) 00:11:13.041 00:11:13.041 Label: (null) 00:11:13.041 UUID: 0e2ee589-a217-4c05-81f7-5fd78ef7a20e 00:11:13.041 Node size: 16384 00:11:13.041 Sector size: 4096 (CPU page size: 4096) 00:11:13.041 Filesystem size: 510.00MiB 00:11:13.041 Block group profiles: 00:11:13.041 Data: single 8.00MiB 00:11:13.041 Metadata: DUP 32.00MiB 00:11:13.041 System: DUP 8.00MiB 00:11:13.041 SSD detected: yes 00:11:13.041 Zoned device: no 00:11:13.041 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:13.041 Checksum: crc32c 00:11:13.041 Number of devices: 1 00:11:13.041 Devices: 00:11:13.041 ID SIZE PATH 00:11:13.041 1 510.00MiB /dev/nvme0n1p1 00:11:13.041 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 72338 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.041 ************************************ 00:11:13.041 END TEST filesystem_in_capsule_btrfs 00:11:13.041 ************************************ 00:11:13.041 00:11:13.041 real 0m0.322s 00:11:13.041 user 0m0.035s 00:11:13.041 sys 0m0.093s 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
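The make_filesystem calls traced through this section (common/autotest_common.sh@930-@941, seen for ext4 and btrfs above and for xfs just below) follow one pattern: pick the force flag the mkfs tool expects, then run it against the partition. Condensed from the xtrace lines, with the helper's retry bookkeeping omitted:

make_filesystem() {
  # mkfs.ext4 overwrites with -F; mkfs.btrfs and mkfs.xfs take -f (condensed sketch)
  local fstype=$1 dev_name=$2 force
  if [[ $fstype == ext4 ]]; then force=-F; else force=-f; fi
  mkfs."$fstype" "$force" "$dev_name"
}

make_filesystem btrfs /dev/nvme0n1p1   # reproduces the mkfs.btrfs -f call above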
00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.041 ************************************ 00:11:13.041 START TEST filesystem_in_capsule_xfs 00:11:13.041 ************************************ 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:13.041 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:13.299 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:13.299 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:13.299 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:13.299 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:13.299 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:13.299 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:13.299 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:13.299 = sectsz=512 attr=2, projid32bit=1 00:11:13.299 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:13.299 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:13.299 data = bsize=4096 blocks=130560, imaxpct=25 00:11:13.299 = sunit=0 swidth=0 blks 00:11:13.299 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:13.299 log =internal log bsize=4096 blocks=16384, version=2 00:11:13.299 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:13.299 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:13.868 Discarding blocks...Done. 
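Every filesystem then gets the same smoke test, visible at target/filesystem.sh@23-@30 in the traces that follow for xfs: mount the partition, create a file, sync, delete it, sync again, unmount, then confirm via kill -0 that the target process survived. As a bare sequence:

# the per-filesystem smoke test the traces repeat (sketch of filesystem.sh@23-@30)
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"   # the nvmf_tgt process must still be alive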
00:11:13.868 15:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:13.869 15:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 72338 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:15.774 ************************************ 00:11:15.774 END TEST filesystem_in_capsule_xfs 00:11:15.774 ************************************ 00:11:15.774 00:11:15.774 real 0m2.664s 00:11:15.774 user 0m0.029s 00:11:15.774 sys 0m0.061s 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:15.774 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 72338 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 72338 ']' 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 72338 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72338 00:11:16.033 killing process with pid 72338 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72338' 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 72338 00:11:16.033 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 72338 00:11:16.600 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:16.600 00:11:16.601 real 0m14.467s 00:11:16.601 user 0m54.304s 00:11:16.601 sys 0m3.336s 00:11:16.601 15:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.601 ************************************ 00:11:16.601 END TEST nvmf_filesystem_in_capsule 00:11:16.601 ************************************ 00:11:16.601 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.601 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:16.601 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.601 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.860 rmmod nvme_tcp 00:11:16.860 rmmod nvme_fabrics 00:11:16.860 rmmod nvme_keyring 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
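The iptr call above works because every firewall rule the harness installs is tagged: the ipts wrapper (its additions are visible near the end of this log) appends an -m comment match containing SPDK_NVMF, so teardown is a single filter pass over iptables-save while the link teardown continues below. Both halves of the idiom, as the traces show them:

# install a rule carrying a findable tag (shape of the ipts wrapper)
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# teardown: drop every tagged rule in one pass (shape of iptr)
iptables-save | grep -v SPDK_NVMF | iptables-restore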
00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:16.860 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:11:17.119 00:11:17.119 real 0m31.311s 00:11:17.119 user 1m51.537s 00:11:17.119 sys 0m7.698s 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.119 ************************************ 00:11:17.119 END TEST nvmf_filesystem 00:11:17.119 ************************************ 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.119 15:19:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.378 ************************************ 00:11:17.378 START TEST nvmf_target_discovery 00:11:17.378 ************************************ 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:17.378 * Looking for test storage... 
00:11:17.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.378 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:17.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.379 --rc genhtml_branch_coverage=1 00:11:17.379 --rc genhtml_function_coverage=1 00:11:17.379 --rc genhtml_legend=1 00:11:17.379 --rc geninfo_all_blocks=1 00:11:17.379 --rc geninfo_unexecuted_blocks=1 00:11:17.379 00:11:17.379 ' 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:17.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.379 --rc genhtml_branch_coverage=1 00:11:17.379 --rc genhtml_function_coverage=1 00:11:17.379 --rc genhtml_legend=1 00:11:17.379 --rc geninfo_all_blocks=1 00:11:17.379 --rc geninfo_unexecuted_blocks=1 00:11:17.379 00:11:17.379 ' 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:17.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.379 --rc genhtml_branch_coverage=1 00:11:17.379 --rc genhtml_function_coverage=1 00:11:17.379 --rc genhtml_legend=1 00:11:17.379 --rc geninfo_all_blocks=1 00:11:17.379 --rc geninfo_unexecuted_blocks=1 00:11:17.379 00:11:17.379 ' 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:17.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.379 --rc genhtml_branch_coverage=1 00:11:17.379 --rc genhtml_function_coverage=1 00:11:17.379 --rc genhtml_legend=1 00:11:17.379 --rc geninfo_all_blocks=1 00:11:17.379 --rc geninfo_unexecuted_blocks=1 00:11:17.379 00:11:17.379 ' 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.379 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.642 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:11:17.642 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:11:17.642 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.642 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.642 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:17.642 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.642 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.642 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.642 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.642 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.642 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.643 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
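The variables assigned above and just below lay out the test network that nvmf_veth_init then builds: two initiator-side veths (10.0.0.1 and 10.0.0.2) stay on the host, two target-side veths (10.0.0.3 and 10.0.0.4) move into the nvmf_tgt_ns_spdk namespace, and everything is joined through the nvmf_br bridge. A stripped-down one-pair version of the same idea, with the bridge omitted (simplified sketch, not the harness topology):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_tgt_if
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up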
00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:17.643 Cannot find device "nvmf_init_br" 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:17.643 Cannot find device "nvmf_init_br2" 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:17.643 Cannot find device "nvmf_tgt_br" 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:17.643 Cannot find device "nvmf_tgt_br2" 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:17.643 Cannot find device "nvmf_init_br" 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:11:17.643 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:17.644 Cannot find device "nvmf_init_br2" 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:17.644 Cannot find device "nvmf_tgt_br" 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:17.644 Cannot find device "nvmf_tgt_br2" 00:11:17.644 15:19:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:17.644 Cannot find device "nvmf_br" 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:17.644 Cannot find device "nvmf_init_if" 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:17.644 Cannot find device "nvmf_init_if2" 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:17.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:17.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:11:17.644 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:17.908 15:19:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:17.908 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:18.168 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:18.168 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.177 ms 00:11:18.168 00:11:18.168 --- 10.0.0.3 ping statistics --- 00:11:18.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.168 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:18.168 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:18.168 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.111 ms 00:11:18.168 00:11:18.168 --- 10.0.0.4 ping statistics --- 00:11:18.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.168 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:18.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:11:18.168 00:11:18.168 --- 10.0.0.1 ping statistics --- 00:11:18.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.168 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:18.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:11:18.168 00:11:18.168 --- 10.0.0.2 ping statistics --- 00:11:18.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.168 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=72938 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 72938 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 72938 ']' 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:18.168 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.168 [2024-11-20 15:19:04.548994] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:11:18.168 [2024-11-20 15:19:04.549334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.427 [2024-11-20 15:19:04.716714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.427 [2024-11-20 15:19:04.802966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.427 [2024-11-20 15:19:04.803253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.427 [2024-11-20 15:19:04.803435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.427 [2024-11-20 15:19:04.803485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.427 [2024-11-20 15:19:04.803540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
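The nvmfappstart/waitforlisten sequence above boils down to: launch nvmf_tgt inside the target namespace, record its pid, and poll until the JSON-RPC Unix socket comes up. A hedged sketch of that sequence, with the binary path and socket path taken from the trace (the polling loop here is an illustration, not the harness's waitforlisten):

# Sketch: start the target under the netns and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    [ -S "$rpc_sock" ] && break          # socket appears once the app is up
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done

# Later rpc_cmd calls in the trace go through this socket, e.g.:
# scripts/rpc.py -s "$rpc_sock" nvmf_create_transport -t tcp -o -u 8192

The -m 0xF core mask requests four reactors, which is why four "Reactor started" notices for cores 0-3 report in immediately below.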
00:11:18.427 [2024-11-20 15:19:04.805020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.427 [2024-11-20 15:19:04.805238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.427 [2024-11-20 15:19:04.805418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.427 [2024-11-20 15:19:04.805423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.364 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.364 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:19.364 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 [2024-11-20 15:19:05.627012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 Null1 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 15:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 [2024-11-20 15:19:05.691208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 Null2 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:19.365 Null3 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 Null4 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.365 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.624 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.624 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:11:19.624 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.624 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.624 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.624 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -a 10.0.0.3 -s 4420 00:11:19.624 00:11:19.624 Discovery Log Number of Records 6, Generation counter 6 00:11:19.624 =====Discovery Log Entry 0====== 00:11:19.624 trtype: tcp 00:11:19.624 adrfam: ipv4 00:11:19.624 subtype: current discovery subsystem 00:11:19.624 treq: not required 00:11:19.624 portid: 0 00:11:19.624 trsvcid: 4420 00:11:19.624 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:19.624 traddr: 10.0.0.3 00:11:19.624 eflags: explicit discovery connections, duplicate discovery information 00:11:19.625 sectype: none 00:11:19.625 =====Discovery Log Entry 1====== 00:11:19.625 trtype: tcp 00:11:19.625 adrfam: ipv4 00:11:19.625 subtype: nvme subsystem 00:11:19.625 treq: not required 00:11:19.625 portid: 0 00:11:19.625 trsvcid: 4420 00:11:19.625 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:19.625 traddr: 10.0.0.3 00:11:19.625 eflags: none 00:11:19.625 sectype: none 00:11:19.625 =====Discovery Log Entry 2====== 00:11:19.625 trtype: tcp 00:11:19.625 adrfam: ipv4 00:11:19.625 subtype: nvme subsystem 00:11:19.625 treq: not required 00:11:19.625 portid: 0 00:11:19.625 trsvcid: 4420 00:11:19.625 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:19.625 traddr: 10.0.0.3 00:11:19.625 eflags: none 00:11:19.625 sectype: none 00:11:19.625 =====Discovery Log Entry 3====== 00:11:19.625 trtype: tcp 00:11:19.625 adrfam: ipv4 00:11:19.625 subtype: nvme subsystem 00:11:19.625 treq: not required 00:11:19.625 portid: 0 00:11:19.625 trsvcid: 4420 00:11:19.625 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:19.625 traddr: 10.0.0.3 00:11:19.625 eflags: none 00:11:19.625 sectype: none 00:11:19.625 =====Discovery Log Entry 4====== 00:11:19.625 trtype: tcp 00:11:19.625 adrfam: ipv4 00:11:19.625 subtype: nvme subsystem 
00:11:19.625 treq: not required 00:11:19.625 portid: 0 00:11:19.625 trsvcid: 4420 00:11:19.625 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:19.625 traddr: 10.0.0.3 00:11:19.625 eflags: none 00:11:19.625 sectype: none 00:11:19.625 =====Discovery Log Entry 5====== 00:11:19.625 trtype: tcp 00:11:19.625 adrfam: ipv4 00:11:19.625 subtype: discovery subsystem referral 00:11:19.625 treq: not required 00:11:19.625 portid: 0 00:11:19.625 trsvcid: 4430 00:11:19.625 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:19.625 traddr: 10.0.0.3 00:11:19.625 eflags: none 00:11:19.625 sectype: none 00:11:19.625 Perform nvmf subsystem discovery via RPC 00:11:19.625 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:19.625 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:19.625 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.625 [ 00:11:19.625 { 00:11:19.625 "allow_any_host": true, 00:11:19.625 "hosts": [], 00:11:19.625 "listen_addresses": [ 00:11:19.625 { 00:11:19.625 "adrfam": "IPv4", 00:11:19.625 "traddr": "10.0.0.3", 00:11:19.625 "trsvcid": "4420", 00:11:19.625 "trtype": "TCP" 00:11:19.625 } 00:11:19.625 ], 00:11:19.625 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:19.625 "subtype": "Discovery" 00:11:19.625 }, 00:11:19.625 { 00:11:19.625 "allow_any_host": true, 00:11:19.625 "hosts": [], 00:11:19.625 "listen_addresses": [ 00:11:19.625 { 00:11:19.625 "adrfam": "IPv4", 00:11:19.625 "traddr": "10.0.0.3", 00:11:19.625 "trsvcid": "4420", 00:11:19.625 "trtype": "TCP" 00:11:19.625 } 00:11:19.625 ], 00:11:19.625 "max_cntlid": 65519, 00:11:19.625 "max_namespaces": 32, 00:11:19.625 "min_cntlid": 1, 00:11:19.625 "model_number": "SPDK bdev Controller", 00:11:19.625 "namespaces": [ 00:11:19.625 { 00:11:19.625 "bdev_name": "Null1", 00:11:19.625 "name": "Null1", 00:11:19.625 "nguid": "19D9C9B1DA1C4810BB724CAD8CD44228", 00:11:19.625 "nsid": 1, 00:11:19.625 "uuid": "19d9c9b1-da1c-4810-bb72-4cad8cd44228" 00:11:19.625 } 00:11:19.625 ], 00:11:19.625 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.625 "serial_number": "SPDK00000000000001", 00:11:19.625 "subtype": "NVMe" 00:11:19.625 }, 00:11:19.625 { 00:11:19.625 "allow_any_host": true, 00:11:19.625 "hosts": [], 00:11:19.625 "listen_addresses": [ 00:11:19.625 { 00:11:19.625 "adrfam": "IPv4", 00:11:19.625 "traddr": "10.0.0.3", 00:11:19.625 "trsvcid": "4420", 00:11:19.625 "trtype": "TCP" 00:11:19.625 } 00:11:19.625 ], 00:11:19.625 "max_cntlid": 65519, 00:11:19.625 "max_namespaces": 32, 00:11:19.625 "min_cntlid": 1, 00:11:19.625 "model_number": "SPDK bdev Controller", 00:11:19.625 "namespaces": [ 00:11:19.625 { 00:11:19.625 "bdev_name": "Null2", 00:11:19.625 "name": "Null2", 00:11:19.625 "nguid": "9F8D1F8975B941959C1F550FFD0C691F", 00:11:19.625 "nsid": 1, 00:11:19.625 "uuid": "9f8d1f89-75b9-4195-9c1f-550ffd0c691f" 00:11:19.625 } 00:11:19.625 ], 00:11:19.625 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:19.625 "serial_number": "SPDK00000000000002", 00:11:19.625 "subtype": "NVMe" 00:11:19.625 }, 00:11:19.625 { 00:11:19.625 "allow_any_host": true, 00:11:19.625 "hosts": [], 00:11:19.625 "listen_addresses": [ 00:11:19.625 { 00:11:19.625 "adrfam": "IPv4", 00:11:19.625 "traddr": "10.0.0.3", 00:11:19.625 "trsvcid": "4420", 00:11:19.625 
"trtype": "TCP" 00:11:19.625 } 00:11:19.625 ], 00:11:19.625 "max_cntlid": 65519, 00:11:19.625 "max_namespaces": 32, 00:11:19.625 "min_cntlid": 1, 00:11:19.625 "model_number": "SPDK bdev Controller", 00:11:19.625 "namespaces": [ 00:11:19.625 { 00:11:19.625 "bdev_name": "Null3", 00:11:19.625 "name": "Null3", 00:11:19.625 "nguid": "131EFBA8C8254F3A8067EFAB483BBF84", 00:11:19.625 "nsid": 1, 00:11:19.625 "uuid": "131efba8-c825-4f3a-8067-efab483bbf84" 00:11:19.625 } 00:11:19.625 ], 00:11:19.625 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:19.625 "serial_number": "SPDK00000000000003", 00:11:19.625 "subtype": "NVMe" 00:11:19.625 }, 00:11:19.625 { 00:11:19.625 "allow_any_host": true, 00:11:19.625 "hosts": [], 00:11:19.625 "listen_addresses": [ 00:11:19.625 { 00:11:19.625 "adrfam": "IPv4", 00:11:19.625 "traddr": "10.0.0.3", 00:11:19.625 "trsvcid": "4420", 00:11:19.625 "trtype": "TCP" 00:11:19.625 } 00:11:19.625 ], 00:11:19.625 "max_cntlid": 65519, 00:11:19.625 "max_namespaces": 32, 00:11:19.625 "min_cntlid": 1, 00:11:19.625 "model_number": "SPDK bdev Controller", 00:11:19.625 "namespaces": [ 00:11:19.625 { 00:11:19.625 "bdev_name": "Null4", 00:11:19.625 "name": "Null4", 00:11:19.625 "nguid": "374EF04451694D1FBA2591E7F16B964D", 00:11:19.625 "nsid": 1, 00:11:19.625 "uuid": "374ef044-5169-4d1f-ba25-91e7f16b964d" 00:11:19.625 } 00:11:19.625 ], 00:11:19.625 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:19.625 "serial_number": "SPDK00000000000004", 00:11:19.625 "subtype": "NVMe" 00:11:19.625 } 00:11:19.625 ] 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.625 15:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.625 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.884 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:19.885 15:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.885 rmmod nvme_tcp 00:11:19.885 rmmod nvme_fabrics 00:11:19.885 rmmod nvme_keyring 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 72938 ']' 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 72938 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 72938 ']' 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 72938 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72938 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.885 killing process with pid 72938 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72938' 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 72938 00:11:19.885 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 72938 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.454 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.713 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:11:20.713 00:11:20.713 real 0m3.372s 00:11:20.713 user 0m7.783s 00:11:20.713 sys 0m1.120s 00:11:20.713 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:11:20.713 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.713 ************************************ 00:11:20.713 END TEST nvmf_target_discovery 00:11:20.713 ************************************ 00:11:20.713 15:19:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:20.713 15:19:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.713 15:19:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.713 15:19:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.713 ************************************ 00:11:20.713 START TEST nvmf_referrals 00:11:20.713 ************************************ 00:11:20.713 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:20.713 * Looking for test storage... 00:11:20.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:20.713 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:20.713 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:20.713 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:20.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.973 --rc genhtml_branch_coverage=1 00:11:20.973 --rc genhtml_function_coverage=1 00:11:20.973 --rc genhtml_legend=1 00:11:20.973 --rc geninfo_all_blocks=1 00:11:20.973 --rc geninfo_unexecuted_blocks=1 00:11:20.973 00:11:20.973 ' 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:20.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.973 --rc genhtml_branch_coverage=1 00:11:20.973 --rc genhtml_function_coverage=1 00:11:20.973 --rc genhtml_legend=1 00:11:20.973 --rc geninfo_all_blocks=1 00:11:20.973 --rc geninfo_unexecuted_blocks=1 00:11:20.973 00:11:20.973 ' 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:20.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.973 --rc genhtml_branch_coverage=1 00:11:20.973 --rc genhtml_function_coverage=1 00:11:20.973 --rc genhtml_legend=1 00:11:20.973 --rc geninfo_all_blocks=1 00:11:20.973 --rc geninfo_unexecuted_blocks=1 00:11:20.973 00:11:20.973 ' 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:20.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.973 --rc genhtml_branch_coverage=1 00:11:20.973 --rc genhtml_function_coverage=1 00:11:20.973 --rc genhtml_legend=1 00:11:20.973 --rc geninfo_all_blocks=1 00:11:20.973 --rc geninfo_unexecuted_blocks=1 00:11:20.973 00:11:20.973 ' 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
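The cmp_versions trace just above (deciding whether the installed lcov predates 2.x) splits each version string on '.', '-' or ':' and compares the resulting fields numerically from left to right, iterating up to the longer of the two field lists. A minimal re-implementation of that comparison (a sketch of the idea only: the name version_lt is illustrative, missing fields defaulting to 0 is an assumption, and non-numeric fields are not handled, unlike the scripts/common.sh original):

# Sketch: succeed when version $1 is strictly older than version $2.
version_lt() {
    local IFS=.-:                        # split fields the way the trace does
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}  # assumed: missing fields count as 0
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1                             # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov < 2"

Here `lt 1.15 2` succeeds on the first field (1 < 2), so the script exports the pre-2.x LCOV_OPTS flag set seen in the trace above.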
00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.973 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.974 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:20.974 15:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:20.974 Cannot find device "nvmf_init_br" 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:20.974 Cannot find device "nvmf_init_br2" 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:20.974 Cannot find device "nvmf_tgt_br" 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:20.974 Cannot find device "nvmf_tgt_br2" 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:11:20.974 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:21.234 Cannot find device "nvmf_init_br" 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:21.234 Cannot find device "nvmf_init_br2" 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:21.234 Cannot find device "nvmf_tgt_br" 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:21.234 Cannot find device "nvmf_tgt_br2" 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:21.234 Cannot find device "nvmf_br" 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:21.234 Cannot find device "nvmf_init_if" 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:21.234 Cannot find device "nvmf_init_if2" 00:11:21.234 15:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:21.234 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:21.493 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:21.493 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.134 ms 00:11:21.493 00:11:21.493 --- 10.0.0.3 ping statistics --- 00:11:21.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.493 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:11:21.493 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:21.493 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:21.494 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.121 ms 00:11:21.494 00:11:21.494 --- 10.0.0.4 ping statistics --- 00:11:21.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.494 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:21.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:11:21.494 00:11:21.494 --- 10.0.0.1 ping statistics --- 00:11:21.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.494 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:21.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:21.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:11:21.494 00:11:21.494 --- 10.0.0.2 ping statistics --- 00:11:21.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.494 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=73221 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 73221 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 73221 ']' 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.494 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.753 [2024-11-20 15:19:08.013001] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
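[annotation] The block above is nvmf_veth_init building the test network: a namespace for the target, two veth pairs per side, a bridge joining the host-side peers, and ping checks in both directions before nvmf_tgt is launched inside the namespace via NVMF_TARGET_NS_CMD. A minimal sketch of the same topology, reduced to one initiator/target pair and using illustrative demo_* names rather than the helper's actual nvmf_* names:

    ip netns add demo_ns                                     # target side gets its own namespace
    ip link add demo_init type veth peer name demo_init_br   # initiator veth pair (host side)
    ip link add demo_tgt type veth peer name demo_tgt_br     # target veth pair
    ip link set demo_tgt netns demo_ns                       # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev demo_init
    ip netns exec demo_ns ip addr add 10.0.0.3/24 dev demo_tgt
    ip link add demo_br type bridge                          # bridge ties the *_br peers together
    ip link set demo_init_br master demo_br
    ip link set demo_tgt_br master demo_br
    for l in demo_init demo_init_br demo_tgt_br demo_br; do ip link set "$l" up; done
    ip netns exec demo_ns ip link set demo_tgt up
    ip netns exec demo_ns ip link set lo up
    ip netns exec demo_ns ping -c 1 10.0.0.1                 # namespace -> host, as the pings above do

The earlier "Cannot find device" messages are expected: teardown of a previous run's interfaces is attempted unconditionally, and each failing command is followed by "true" so the script keeps going.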
00:11:21.753 [2024-11-20 15:19:08.013121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.753 [2024-11-20 15:19:08.169597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.011 [2024-11-20 15:19:08.254385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.011 [2024-11-20 15:19:08.254464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.011 [2024-11-20 15:19:08.254475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.011 [2024-11-20 15:19:08.254483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.011 [2024-11-20 15:19:08.254490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.011 [2024-11-20 15:19:08.255910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.011 [2024-11-20 15:19:08.256086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.011 [2024-11-20 15:19:08.256000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.011 [2024-11-20 15:19:08.256091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.579 15:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.579 15:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:22.579 15:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:22.579 15:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.579 15:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.579 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.579 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.579 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.579 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.579 [2024-11-20 15:19:09.029232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.579 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.579 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:11:22.579 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.579 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.579 [2024-11-20 15:19:09.045452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:11:22.579 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.579 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:22.579 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.579 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.580 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.580 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:22.580 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.580 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.842 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
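[annotation] At this point the test has created the TCP transport, put a discovery listener on 10.0.0.3:8009, added three referrals (127.0.0.2/.3/.4, port 4430), and is reading them back twice: once through the RPC and once through the kernel initiator. The RPC half, sketched with rpc.py called directly instead of through the rpc_cmd wrapper (socket path left at the default, which is an assumption):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430   # same flags the log shows
    done
    $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # expected: 127.0.0.2 127.0.0.3 127.0.0.4 -- the comparison get_referral_ips rpc makes above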
nvme == \n\v\m\e ]] 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.101 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
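[annotation] The nvme-side check (get_referral_ips nvme) asks the discovery service itself for its log page and filters out the record describing the current discovery subsystem, so only referrals remain. Sketched without the --hostnqn/--hostid arguments the test passes (nvme-cli generates defaults when they are omitted):

    nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort | xargs
    # With the two named referrals just added (-n discovery and -n nqn.2016-06.io.spdk:cnode1,
    # both at 127.0.0.2:4430), this prints "127.0.0.2 127.0.0.2": one log page record per
    # referral, distinguished by subtype/subnqn rather than by address.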
00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:23.359 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.619 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:23.619 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.619 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:23.619 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:23.619 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:23.619 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:23.619 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:23.619 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:23.619 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:23.619 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:23.879 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:23.879 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:23.879 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:23.879 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:23.879 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:23.879 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:23.879 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:23.879 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:23.879 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:23.879 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:23.879 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:23.879 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:23.879 
15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:24.138 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
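[annotation] With the referral list verified empty again, the test disarms its signal trap and runs nvmftestfini by hand, so the success and failure paths share one teardown. The shape of that idiom, simplified (cleanup body trimmed to the module unloads; the real helper also kills the target, removes iptables rules, and deletes the veth/netns setup):

    cleanup() {
        # unload in dependency order; tolerate "module not loaded"
        modprobe -r nvme-tcp 2>/dev/null || true
        modprobe -r nvme-fabrics 2>/dev/null || true
    }
    trap cleanup SIGINT SIGTERM EXIT   # installed before the test body runs
    # ... test body ...
    trap - SIGINT SIGTERM EXIT         # body passed: disarm
    cleanup                            # and tear down exactly once, deliberately

The set +e plus the {1..20} retry loop around modprobe -v -r below is apparently there to tolerate the initiator modules still being referenced for a moment after the last disconnect.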
00:11:24.398 rmmod nvme_tcp 00:11:24.398 rmmod nvme_fabrics 00:11:24.398 rmmod nvme_keyring 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 73221 ']' 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 73221 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 73221 ']' 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 73221 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73221 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.398 killing process with pid 73221 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73221' 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 73221 00:11:24.398 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 73221 00:11:24.657 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:24.657 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:24.657 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:24.657 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:24.657 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:24.657 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:24.657 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:24.657 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:24.657 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:24.657 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:24.657 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:24.657 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:24.916 15:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.916 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.175 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:11:25.175 00:11:25.175 real 0m4.362s 00:11:25.175 user 0m12.107s 00:11:25.175 sys 0m1.477s 00:11:25.175 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.175 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.175 ************************************ 00:11:25.175 END TEST nvmf_referrals 00:11:25.175 ************************************ 00:11:25.175 15:19:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:25.175 15:19:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:25.175 15:19:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.175 15:19:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:25.175 ************************************ 00:11:25.175 START TEST nvmf_connect_disconnect 00:11:25.175 ************************************ 00:11:25.175 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:25.175 * Looking for test storage... 
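[annotation] That closes out nvmf_referrals (about 4.4s wall time) and its teardown. Worth noting in the teardown is the firewall cleanup pairing: every rule the test inserted earlier carried an iptables comment tag, so iptr can drop them all at once by round-tripping the ruleset through a filter instead of deleting rules one by one:

    # setup (ipts): tag each inserted rule with a recognizable comment
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # teardown (iptr): keep every saved line that is not tagged, reload the rest
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The next test, nvmf_connect_disconnect, re-sources test/nvmf/common.sh and rebuilds the same veth topology from scratch, which is why the PATH exports and the common.sh line-33 warning repeat below.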
00:11:25.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:25.175 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:25.175 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:25.175 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:25.434 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:25.434 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.434 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.434 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.434 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.434 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.434 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:25.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.435 --rc genhtml_branch_coverage=1 00:11:25.435 --rc genhtml_function_coverage=1 00:11:25.435 --rc genhtml_legend=1 00:11:25.435 --rc geninfo_all_blocks=1 00:11:25.435 --rc geninfo_unexecuted_blocks=1 00:11:25.435 00:11:25.435 ' 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:25.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.435 --rc genhtml_branch_coverage=1 00:11:25.435 --rc genhtml_function_coverage=1 00:11:25.435 --rc genhtml_legend=1 00:11:25.435 --rc geninfo_all_blocks=1 00:11:25.435 --rc geninfo_unexecuted_blocks=1 00:11:25.435 00:11:25.435 ' 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:25.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.435 --rc genhtml_branch_coverage=1 00:11:25.435 --rc genhtml_function_coverage=1 00:11:25.435 --rc genhtml_legend=1 00:11:25.435 --rc geninfo_all_blocks=1 00:11:25.435 --rc geninfo_unexecuted_blocks=1 00:11:25.435 00:11:25.435 ' 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:25.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.435 --rc genhtml_branch_coverage=1 00:11:25.435 --rc genhtml_function_coverage=1 00:11:25.435 --rc genhtml_legend=1 00:11:25.435 --rc geninfo_all_blocks=1 00:11:25.435 --rc geninfo_unexecuted_blocks=1 00:11:25.435 00:11:25.435 ' 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.435 15:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.435 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:25.435 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:25.436 Cannot find device "nvmf_init_br" 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:25.436 Cannot find device "nvmf_init_br2" 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:25.436 Cannot find device "nvmf_tgt_br" 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:25.436 Cannot find device "nvmf_tgt_br2" 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:25.436 Cannot find device "nvmf_init_br" 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:25.436 Cannot find device "nvmf_init_br2" 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:25.436 Cannot find device "nvmf_tgt_br" 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:25.436 Cannot find device "nvmf_tgt_br2" 00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
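Two details in the trace above are easy to misread. The "[: : integer expression expected" complaint from common.sh line 33 is test(1) rejecting '[' '' -eq 1 ']': the variable under test is unset, so it expands to an empty string, which -eq cannot treat as an integer; the check simply evaluates false and the run continues. The repeated "Cannot find device ..." messages are likewise expected: nvmf_veth_init tears down any leftover interfaces before creating fresh ones, and each failing ip command is paired with true so the cleanup keeps going. A minimal sketch of the usual hardening for both (FLAG is a placeholder name, not the actual variable in common.sh):

    # guard integer comparisons against unset/empty variables
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi

    # make pre-setup cleanup idempotent instead of relying on a trailing true
    ip link delete nvmf_br type bridge 2>/dev/null || true
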
00:11:25.436 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:25.695 Cannot find device "nvmf_br" 00:11:25.695 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:11:25.695 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:25.695 Cannot find device "nvmf_init_if" 00:11:25.695 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:11:25.695 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:25.695 Cannot find device "nvmf_init_if2" 00:11:25.695 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:11:25.695 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:25.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:25.695 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:11:25.695 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:25.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:25.695 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:11:25.695 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:25.695 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:25.695 15:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:25.695 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:25.954 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:25.954 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.140 ms 00:11:25.954 00:11:25.954 --- 10.0.0.3 ping statistics --- 00:11:25.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.954 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:25.954 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:25.954 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.107 ms 00:11:25.954 00:11:25.954 --- 10.0.0.4 ping statistics --- 00:11:25.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.954 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:25.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:11:25.954 00:11:25.954 --- 10.0.0.1 ping statistics --- 00:11:25.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.954 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:25.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:11:25.954 00:11:25.954 --- 10.0.0.2 ping statistics --- 00:11:25.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.954 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=73586 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 73586 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 73586 ']' 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.954 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.955 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.955 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.955 [2024-11-20 15:19:12.410982] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:11:25.955 [2024-11-20 15:19:12.411091] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.213 [2024-11-20 15:19:12.567865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.213 [2024-11-20 15:19:12.659009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.213 [2024-11-20 15:19:12.659145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.213 [2024-11-20 15:19:12.659162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.213 [2024-11-20 15:19:12.659178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.213 [2024-11-20 15:19:12.659190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
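For context, "waitforlisten 73586" above polls until the freshly started nvmf_tgt (pid 73586) is accepting RPCs on /var/tmp/spdk.sock, giving up after max_retries=100. A simplified sketch of that poll follows; the real helper in common/autotest_common.sh is more involved, but the shape is:

    # simplified waitforlisten-style poll, assuming only the pid and socket
    # path visible in the trace; not the actual helper implementation
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do              # max_retries=100, as in the trace
            kill -0 "$pid" 2>/dev/null || return 1   # give up if the target died
            [ -S "$rpc_addr" ] && return 0           # UNIX RPC socket is up
            sleep 0.5
        done
        return 1
    }
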
00:11:26.213 [2024-11-20 15:19:12.661143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.213 [2024-11-20 15:19:12.661263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.213 [2024-11-20 15:19:12.661440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.213 [2024-11-20 15:19:12.661427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.153 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.153 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:27.153 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:27.153 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.153 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:27.153 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.153 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:27.153 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:27.154 [2024-11-20 15:19:13.439828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:27.154 15:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:27.154 [2024-11-20 15:19:13.528354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:27.154 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:29.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.563 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:38.563 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:38.563 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:38.563 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:38.822 rmmod nvme_tcp 00:11:38.822 rmmod nvme_fabrics 00:11:38.822 rmmod nvme_keyring 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 73586 ']' 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 73586 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 73586 ']' 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 73586 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
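Stripped of xtrace noise, the target-side setup that connect_disconnect.sh just performed reduces to five RPCs followed by the connect/disconnect loop (num_iterations=5). The RPC lines below are taken from the trace; the loop body runs under set +x, so the exact nvme connect invocation is an assumption based on the NVME_HOST variables defined earlier:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                   # creates Malloc0: 64 MiB, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    for i in $(seq 1 5); do                          # num_iterations=5
        # connect flags assumed from the NVME_HOST array set up in common.sh
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # -> "disconnected 1 controller(s)"
    done
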
00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73586 00:11:38.822 killing process with pid 73586 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73586' 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 73586 00:11:38.822 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 73586 00:11:39.082 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:39.082 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:39.082 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:39.082 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:39.082 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:39.082 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:39.082 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:39.082 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.082 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:39.082 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:39.082 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:39.082 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:39.341 15:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.341 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.601 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:11:39.601 00:11:39.601 real 0m14.354s 00:11:39.601 user 0m49.531s 00:11:39.601 sys 0m3.288s 00:11:39.601 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.601 ************************************ 00:11:39.601 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:39.601 END TEST nvmf_connect_disconnect 00:11:39.601 ************************************ 00:11:39.601 15:19:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:39.601 15:19:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.601 15:19:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.601 15:19:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:39.601 ************************************ 00:11:39.601 START TEST nvmf_multitarget 00:11:39.601 ************************************ 00:11:39.601 15:19:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:39.601 * Looking for test storage... 
00:11:39.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:39.601 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:39.601 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:39.601 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.913 --rc genhtml_branch_coverage=1 00:11:39.913 --rc genhtml_function_coverage=1 00:11:39.913 --rc genhtml_legend=1 00:11:39.913 --rc geninfo_all_blocks=1 00:11:39.913 --rc geninfo_unexecuted_blocks=1 00:11:39.913 00:11:39.913 ' 00:11:39.913 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.913 --rc genhtml_branch_coverage=1 00:11:39.914 --rc genhtml_function_coverage=1 00:11:39.914 --rc genhtml_legend=1 00:11:39.914 --rc geninfo_all_blocks=1 00:11:39.914 --rc geninfo_unexecuted_blocks=1 00:11:39.914 00:11:39.914 ' 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:39.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.914 --rc genhtml_branch_coverage=1 00:11:39.914 --rc genhtml_function_coverage=1 00:11:39.914 --rc genhtml_legend=1 00:11:39.914 --rc geninfo_all_blocks=1 00:11:39.914 --rc geninfo_unexecuted_blocks=1 00:11:39.914 00:11:39.914 ' 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:39.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.914 --rc genhtml_branch_coverage=1 00:11:39.914 --rc genhtml_function_coverage=1 00:11:39.914 --rc genhtml_legend=1 00:11:39.914 --rc geninfo_all_blocks=1 00:11:39.914 --rc geninfo_unexecuted_blocks=1 00:11:39.914 00:11:39.914 ' 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:39.914 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:39.914 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:39.915 Cannot find device "nvmf_init_br" 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:39.915 Cannot find device "nvmf_init_br2" 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:39.915 Cannot find device "nvmf_tgt_br" 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:39.915 Cannot find device "nvmf_tgt_br2" 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:39.915 Cannot find device "nvmf_init_br" 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:39.915 Cannot find device "nvmf_init_br2" 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:39.915 Cannot find device "nvmf_tgt_br" 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:39.915 Cannot find device "nvmf_tgt_br2" 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:39.915 Cannot find device "nvmf_br" 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:39.915 Cannot find device "nvmf_init_if" 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:39.915 Cannot find device "nvmf_init_if2" 00:11:39.915 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:40.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:40.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:40.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:40.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:11:40.187 00:11:40.187 --- 10.0.0.3 ping statistics --- 00:11:40.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.187 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:40.187 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:40.187 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.100 ms 00:11:40.187 00:11:40.187 --- 10.0.0.4 ping statistics --- 00:11:40.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.187 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:40.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:40.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:11:40.187 00:11:40.187 --- 10.0.0.1 ping statistics --- 00:11:40.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.187 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:40.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:40.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:11:40.187 00:11:40.187 --- 10.0.0.2 ping statistics --- 00:11:40.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.187 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:40.187 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=74049 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 74049 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 74049 ']' 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.446 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:40.446 [2024-11-20 15:19:26.756279] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:11:40.446 [2024-11-20 15:19:26.756370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.446 [2024-11-20 15:19:26.909094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.706 [2024-11-20 15:19:26.991491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.706 [2024-11-20 15:19:26.991554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.706 [2024-11-20 15:19:26.991564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.706 [2024-11-20 15:19:26.991573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.706 [2024-11-20 15:19:26.991580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.706 [2024-11-20 15:19:26.992896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.706 [2024-11-20 15:19:26.993016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.706 [2024-11-20 15:19:26.993102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.706 [2024-11-20 15:19:26.993106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.276 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.276 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:41.276 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:41.276 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:41.276 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:41.536 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.536 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:41.536 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:41.536 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:41.536 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:41.536 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:41.536 "nvmf_tgt_1" 00:11:41.536 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:41.795 "nvmf_tgt_2" 00:11:41.795 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:41.795 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:11:41.795 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:41.795 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:42.053 true 00:11:42.053 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:42.053 true 00:11:42.053 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:42.053 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.313 rmmod nvme_tcp 00:11:42.313 rmmod nvme_fabrics 00:11:42.313 rmmod nvme_keyring 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 74049 ']' 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 74049 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 74049 ']' 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 74049 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74049 00:11:42.313 killing process with pid 74049 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
74049' 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 74049 00:11:42.313 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 74049 00:11:42.572 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.572 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.572 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.572 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:42.831 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.090 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:43.090 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.090 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.090 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.090 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:11:43.090 00:11:43.090 
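
The body of nvmf_multitarget is the short create/verify/delete cycle traced above: `nvmf_get_targets` piped through `jq length` must report 1 (the default target), then 3 after `nvmf_create_target` adds nvmf_tgt_1 and nvmf_tgt_2, then 1 again once both are deleted, with any mismatch tripping the `'[' N '!=' N ']'` guards. Teardown then undoes everything setup created, locating the firewall rules by their SPDK_NVMF comment. A sketch of the count assertion and the tag-and-sweep rule removal, with the RPC script path as in the log (the assertion helper is hypothetical):

    #!/usr/bin/env bash
    set -e
    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    expect_targets() {
        local want=$1 got
        got=$("$rpc" nvmf_get_targets | jq length)
        if [ "$got" -ne "$want" ]; then
            echo "expected $want targets, got $got" >&2
            exit 1
        fi
    }
    expect_targets 1
    "$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
    expect_targets 3
    "$rpc" nvmf_delete_target -n nvmf_tgt_1
    "$rpc" nvmf_delete_target -n nvmf_tgt_2
    expect_targets 1
    # Teardown: drop only the rules this run tagged, leave everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
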
real 0m3.448s 00:11:43.090 user 0m9.326s 00:11:43.090 sys 0m1.116s 00:11:43.090 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.090 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.090 ************************************ 00:11:43.090 END TEST nvmf_multitarget 00:11:43.090 ************************************ 00:11:43.090 15:19:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:43.090 15:19:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:43.090 15:19:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.090 15:19:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:43.090 ************************************ 00:11:43.090 START TEST nvmf_rpc 00:11:43.090 ************************************ 00:11:43.091 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:43.091 * Looking for test storage... 00:11:43.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:43.091 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.350 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:43.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.351 --rc genhtml_branch_coverage=1 00:11:43.351 --rc genhtml_function_coverage=1 00:11:43.351 --rc genhtml_legend=1 00:11:43.351 --rc geninfo_all_blocks=1 00:11:43.351 --rc geninfo_unexecuted_blocks=1 00:11:43.351 00:11:43.351 ' 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:43.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.351 --rc genhtml_branch_coverage=1 00:11:43.351 --rc genhtml_function_coverage=1 00:11:43.351 --rc genhtml_legend=1 00:11:43.351 --rc geninfo_all_blocks=1 00:11:43.351 --rc geninfo_unexecuted_blocks=1 00:11:43.351 00:11:43.351 ' 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:43.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.351 --rc genhtml_branch_coverage=1 00:11:43.351 --rc genhtml_function_coverage=1 00:11:43.351 --rc genhtml_legend=1 00:11:43.351 --rc geninfo_all_blocks=1 00:11:43.351 --rc geninfo_unexecuted_blocks=1 00:11:43.351 00:11:43.351 ' 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:43.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.351 --rc genhtml_branch_coverage=1 00:11:43.351 --rc genhtml_function_coverage=1 00:11:43.351 --rc genhtml_legend=1 00:11:43.351 --rc geninfo_all_blocks=1 00:11:43.351 --rc geninfo_unexecuted_blocks=1 00:11:43.351 00:11:43.351 ' 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.351 15:19:29 
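
The trace above is scripts/common.sh deciding which lcov flags this toolchain understands: `lt 1.15 2` calls `cmp_versions`, which splits each version string on `.`, `-` and `:` into an array (`IFS=.-:` with `read -ra`), then walks the fields numerically via `decimal`, so 1.15 sorts below 2 rather than being compared lexically; the result selects the `--rc lcov_branch_coverage=1 ...` option spelling exported in LCOV_OPTS. A minimal re-implementation of that field-wise comparison (function names mirror the log; the in-tree version also validates each field through its `decimal` helper):

    #!/usr/bin/env bash
    # Return 0 if version $1 relates to $3 under comparator $2 ('<' or '>').
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}   # pad missing fields with 0
            (( a > b )) && { [[ $2 == '>' ]]; return; }
            (( a < b )) && { [[ $2 == '<' ]]; return; }
        done
        return 1   # equal: neither strictly < nor >
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "lcov older than 2: use the legacy --rc spelling"
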
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.351 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:43.351 Cannot find device "nvmf_init_br" 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:11:43.351 15:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:43.351 Cannot find device "nvmf_init_br2" 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:11:43.351 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:43.352 Cannot find device "nvmf_tgt_br" 00:11:43.352 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:11:43.352 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:43.352 Cannot find device "nvmf_tgt_br2" 00:11:43.352 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:11:43.352 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:43.352 Cannot find device "nvmf_init_br" 00:11:43.352 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:11:43.352 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:43.352 Cannot find device "nvmf_init_br2" 00:11:43.352 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:11:43.352 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:43.611 Cannot find device "nvmf_tgt_br" 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:43.611 Cannot find device "nvmf_tgt_br2" 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:43.611 Cannot find device "nvmf_br" 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:43.611 Cannot find device "nvmf_init_if" 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:43.611 Cannot find device "nvmf_init_if2" 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:43.611 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:43.611 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:43.870 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:43.870 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:11:43.870 00:11:43.870 --- 10.0.0.3 ping statistics --- 00:11:43.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.870 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:43.870 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:43.870 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:11:43.870 00:11:43.870 --- 10.0.0.4 ping statistics --- 00:11:43.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.870 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:43.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:11:43.870 00:11:43.870 --- 10.0.0.1 ping statistics --- 00:11:43.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.870 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:43.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:43.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:11:43.870 00:11:43.870 --- 10.0.0.2 ping statistics --- 00:11:43.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.870 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=74337 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 74337 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 74337 ']' 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.870 15:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.870 [2024-11-20 15:19:30.280625] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
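
`nvmfappstart -m 0xF` launches `nvmf_tgt` inside the namespace with a four-core mask and an all-groups trace mask (`-e 0xFFFF`), records the PID (74337 this time), and `waitforlisten` then blocks until the target's JSON-RPC server answers on `/var/tmp/spdk.sock`, giving up after `max_retries=100` attempts. A sketch of that readiness poll, assuming SPDK's `rpc.py` is on PATH; the real helper uses the repo's RPC client directly, so treat this as an approximation:

    #!/usr/bin/env bash
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            # Bail out early if the target already died.
            kill -0 "$pid" 2>/dev/null || return 1
            # rpc.py exits 0 once the socket answers a trivial RPC.
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    waitforlisten $!
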
00:11:43.870 [2024-11-20 15:19:30.280725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.129 [2024-11-20 15:19:30.432824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.129 [2024-11-20 15:19:30.515533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.129 [2024-11-20 15:19:30.515598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.129 [2024-11-20 15:19:30.515608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.129 [2024-11-20 15:19:30.515617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.129 [2024-11-20 15:19:30.515624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.129 [2024-11-20 15:19:30.517068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.129 [2024-11-20 15:19:30.517307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.129 [2024-11-20 15:19:30.517453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.129 [2024-11-20 15:19:30.517367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.696 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.696 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:44.696 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.696 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.696 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:44.955 "poll_groups": [ 00:11:44.955 { 00:11:44.955 "admin_qpairs": 0, 00:11:44.955 "completed_nvme_io": 0, 00:11:44.955 "current_admin_qpairs": 0, 00:11:44.955 "current_io_qpairs": 0, 00:11:44.955 "io_qpairs": 0, 00:11:44.955 "name": "nvmf_tgt_poll_group_000", 00:11:44.955 "pending_bdev_io": 0, 00:11:44.955 "transports": [] 00:11:44.955 }, 00:11:44.955 { 00:11:44.955 "admin_qpairs": 0, 00:11:44.955 "completed_nvme_io": 0, 00:11:44.955 "current_admin_qpairs": 0, 00:11:44.955 "current_io_qpairs": 0, 00:11:44.955 "io_qpairs": 0, 00:11:44.955 "name": "nvmf_tgt_poll_group_001", 00:11:44.955 "pending_bdev_io": 0, 00:11:44.955 "transports": [] 00:11:44.955 }, 00:11:44.955 { 00:11:44.955 "admin_qpairs": 0, 00:11:44.955 "completed_nvme_io": 0, 00:11:44.955 "current_admin_qpairs": 0, 00:11:44.955 "current_io_qpairs": 0, 
00:11:44.955 "io_qpairs": 0, 00:11:44.955 "name": "nvmf_tgt_poll_group_002", 00:11:44.955 "pending_bdev_io": 0, 00:11:44.955 "transports": [] 00:11:44.955 }, 00:11:44.955 { 00:11:44.955 "admin_qpairs": 0, 00:11:44.955 "completed_nvme_io": 0, 00:11:44.955 "current_admin_qpairs": 0, 00:11:44.955 "current_io_qpairs": 0, 00:11:44.955 "io_qpairs": 0, 00:11:44.955 "name": "nvmf_tgt_poll_group_003", 00:11:44.955 "pending_bdev_io": 0, 00:11:44.955 "transports": [] 00:11:44.955 } 00:11:44.955 ], 00:11:44.955 "tick_rate": 2490000000 00:11:44.955 }' 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.955 [2024-11-20 15:19:31.360876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:44.955 "poll_groups": [ 00:11:44.955 { 00:11:44.955 "admin_qpairs": 0, 00:11:44.955 "completed_nvme_io": 0, 00:11:44.955 "current_admin_qpairs": 0, 00:11:44.955 "current_io_qpairs": 0, 00:11:44.955 "io_qpairs": 0, 00:11:44.955 "name": "nvmf_tgt_poll_group_000", 00:11:44.955 "pending_bdev_io": 0, 00:11:44.955 "transports": [ 00:11:44.955 { 00:11:44.955 "trtype": "TCP" 00:11:44.955 } 00:11:44.955 ] 00:11:44.955 }, 00:11:44.955 { 00:11:44.955 "admin_qpairs": 0, 00:11:44.955 "completed_nvme_io": 0, 00:11:44.955 "current_admin_qpairs": 0, 00:11:44.955 "current_io_qpairs": 0, 00:11:44.955 "io_qpairs": 0, 00:11:44.955 "name": "nvmf_tgt_poll_group_001", 00:11:44.955 "pending_bdev_io": 0, 00:11:44.955 "transports": [ 00:11:44.955 { 00:11:44.955 "trtype": "TCP" 00:11:44.955 } 00:11:44.955 ] 00:11:44.955 }, 00:11:44.955 { 00:11:44.955 "admin_qpairs": 0, 00:11:44.955 "completed_nvme_io": 0, 00:11:44.955 "current_admin_qpairs": 0, 00:11:44.955 "current_io_qpairs": 0, 00:11:44.955 "io_qpairs": 0, 00:11:44.955 "name": "nvmf_tgt_poll_group_002", 00:11:44.955 "pending_bdev_io": 0, 00:11:44.955 "transports": [ 00:11:44.955 { 00:11:44.955 "trtype": "TCP" 00:11:44.955 } 
00:11:44.955 ] 00:11:44.955 }, 00:11:44.955 { 00:11:44.955 "admin_qpairs": 0, 00:11:44.955 "completed_nvme_io": 0, 00:11:44.955 "current_admin_qpairs": 0, 00:11:44.955 "current_io_qpairs": 0, 00:11:44.955 "io_qpairs": 0, 00:11:44.955 "name": "nvmf_tgt_poll_group_003", 00:11:44.955 "pending_bdev_io": 0, 00:11:44.955 "transports": [ 00:11:44.955 { 00:11:44.955 "trtype": "TCP" 00:11:44.955 } 00:11:44.955 ] 00:11:44.955 } 00:11:44.955 ], 00:11:44.955 "tick_rate": 2490000000 00:11:44.955 }' 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:44.955 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.214 Malloc1 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:45.214 15:19:31 
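
Before provisioning anything, rpc.sh sanity-checks the freshly started target through `nvmf_get_stats`: `jcount` counts the nodes a jq filter selects (here verifying 4 poll groups, one per core in the 0xF mask) and `jsum` totals a numeric field across them (here confirming zero admin and I/O qpairs), and after `nvmf_create_transport -t tcp -o -u 8192` each poll group must additionally list a TCP transport. The two helpers, reconstructed from the trace; the in-tree versions read the captured `$stats` string, so piping `rpc.py` inline here is a simplification:

    #!/usr/bin/env bash
    # Count how many nodes a jq filter selects in the stats document.
    jcount() {
        local filter=$1
        rpc.py nvmf_get_stats | jq "$filter" | wc -l
    }
    # Sum a numeric field across all matches.
    jsum() {
        local filter=$1
        rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1}END{print s}'
    }
    [ "$(jcount '.poll_groups[].name')" -eq 4 ]        # one poll group per core
    [ "$(jsum '.poll_groups[].admin_qpairs')" -eq 0 ]  # nothing connected yet
    [ "$(jsum '.poll_groups[].io_qpairs')" -eq 0 ]
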
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.214 [2024-11-20 15:19:31.575905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:45.214 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -a 10.0.0.3 -s 4420 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -a 10.0.0.3 -s 4420 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -a 10.0.0.3 -s 4420 00:11:45.215 [2024-11-20 15:19:31.612313] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105' 00:11:45.215 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:45.215 could not add new controller: failed to write to nvme-fabrics device 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
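
This is the negative half of the host-ACL check: with `nvmf_subsystem_allow_any_host -d` in force and no host added to cnode1, the `nvme connect` is supposed to be rejected, and the target duly logs "does not allow host ..." while the initiator sees an I/O error on /dev/nvme-fabrics. The `NOT` wrapper captures that outcome: it runs the command, keeps the exit status in `es`, treats statuses above 128 (death by signal) as real failures, and returns success only when the command failed cleanly, exactly inverting the usual assertion. A sketch of the inversion, simplified from the autotest_common.sh trace (the real helper also resolves functions and binaries via `valid_exec_arg`); subsystem and address values as in the log:

    #!/usr/bin/env bash
    # Succeed only if the wrapped command fails with a plain nonzero status.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # killed by a signal: a genuine problem
        (( es == 0 )) && return 1    # command succeeded: assertion failed
        return 0                     # clean failure: exactly what we wanted
    }
    # Must be refused while the host is not on the subsystem's allow list.
    NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105

Once `nvmf_subsystem_add_host` registers the host NQN, the same connect is expected to succeed, which is what the log verifies next before disconnecting again.
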
00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.215 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:45.474 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.474 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:45.474 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.474 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:45.474 15:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:47.376 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:47.376 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.376 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:47.376 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:47.376 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.376 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:47.376 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.634 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.634 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:47.634 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:47.634 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:47.635 [2024-11-20 15:19:33.969632] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105' 00:11:47.635 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:47.635 could not add new controller: failed to write to nvme-fabrics device 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.635 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:47.893 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.893 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.893 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.893 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:47.893 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:49.797 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.797 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.797 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.797 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:49.797 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.797 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:49.797 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.797 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.797 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:49.797 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:49.797 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.056 [2024-11-20 15:19:36.323640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:50.056 15:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.622 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.623 [2024-11-20 15:19:38.659191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.623 15:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:52.623 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.528 15:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.528 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.788 [2024-11-20 15:19:41.019170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:54.788 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:57.320 15:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.320 [2024-11-20 15:19:43.382959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:57.320 15:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:59.223 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:59.223 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:59.223 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.223 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:59.223 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.223 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:59.223 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.223 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.223 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:59.223 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:59.223 15:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.223 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.223 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.482 [2024-11-20 15:19:45.758990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:59.482 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:59.740 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.740 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:59.740 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.740 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:59.740 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:01.682 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:01.682 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:01.682 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.682 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:01.682 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.682 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:01.682 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
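Each of the five iterations above runs the same provision/connect/teardown cycle. Condensed, the loop the trace is executing looks like this (a reconstruction from the target/rpc.sh@81-94 markers; rpc_cmd and the waitforserial helpers come from the test's common scripts):

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn "$HOSTNQN"                    # same host UUID NQN as above
        waitforserial SPDKISFASTANDAWESOME          # poll lsblk until the namespace appears
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The -n 5 on nvmf_subsystem_add_ns pins the namespace ID, which is why the teardown's remove_ns targets nsid 5.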
00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.682 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.683 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:01.683 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.683 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.683 [2024-11-20 15:19:48.142829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:01.683 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.683 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.683 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.683 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.683 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.683 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.683 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.683 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 [2024-11-20 15:19:48.214821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:01.941 15:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 [2024-11-20 15:19:48.282709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.941 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 [2024-11-20 15:19:48.354661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.942 
15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.942 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 [2024-11-20 15:19:48.418613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.200 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:02.200 "poll_groups": [ 00:12:02.200 { 00:12:02.200 "admin_qpairs": 2, 00:12:02.200 "completed_nvme_io": 117, 00:12:02.200 "current_admin_qpairs": 0, 00:12:02.200 "current_io_qpairs": 0, 00:12:02.200 "io_qpairs": 16, 00:12:02.200 "name": "nvmf_tgt_poll_group_000", 00:12:02.200 "pending_bdev_io": 0, 00:12:02.200 "transports": [ 00:12:02.200 { 00:12:02.200 "trtype": "TCP" 00:12:02.200 } 00:12:02.200 ] 00:12:02.200 }, 00:12:02.201 { 00:12:02.201 "admin_qpairs": 3, 00:12:02.201 "completed_nvme_io": 67, 00:12:02.201 "current_admin_qpairs": 0, 00:12:02.201 "current_io_qpairs": 0, 00:12:02.201 "io_qpairs": 17, 00:12:02.201 "name": "nvmf_tgt_poll_group_001", 00:12:02.201 "pending_bdev_io": 0, 00:12:02.201 "transports": [ 00:12:02.201 { 00:12:02.201 "trtype": "TCP" 00:12:02.201 } 00:12:02.201 ] 00:12:02.201 }, 00:12:02.201 { 00:12:02.201 "admin_qpairs": 1, 00:12:02.201 "completed_nvme_io": 118, 00:12:02.201 "current_admin_qpairs": 0, 00:12:02.201 "current_io_qpairs": 0, 00:12:02.201 "io_qpairs": 19, 00:12:02.201 "name": "nvmf_tgt_poll_group_002", 00:12:02.201 "pending_bdev_io": 0, 00:12:02.201 "transports": [ 00:12:02.201 { 00:12:02.201 "trtype": "TCP" 00:12:02.201 } 00:12:02.201 ] 00:12:02.201 }, 00:12:02.201 { 00:12:02.201 "admin_qpairs": 1, 00:12:02.201 "completed_nvme_io": 118, 00:12:02.201 "current_admin_qpairs": 0, 00:12:02.201 "current_io_qpairs": 0, 00:12:02.201 "io_qpairs": 18, 00:12:02.201 "name": "nvmf_tgt_poll_group_003", 00:12:02.201 "pending_bdev_io": 0, 00:12:02.201 "transports": [ 00:12:02.201 { 00:12:02.201 "trtype": "TCP" 00:12:02.201 } 00:12:02.201 ] 00:12:02.201 } 00:12:02.201 ], 
00:12:02.201 "tick_rate": 2490000000 00:12:02.201 }' 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.201 rmmod nvme_tcp 00:12:02.201 rmmod nvme_fabrics 00:12:02.201 rmmod nvme_keyring 00:12:02.201 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 74337 ']' 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 74337 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 74337 ']' 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 74337 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74337 00:12:02.459 killing process with pid 74337 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.459 15:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74337' 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 74337 00:12:02.459 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 74337 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:02.718 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:12:02.976 00:12:02.976 real 0m19.933s 00:12:02.976 user 1m12.047s 00:12:02.976 sys 0m4.275s 00:12:02.976 ************************************ 
00:12:02.976 END TEST nvmf_rpc 00:12:02.976 ************************************ 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.976 ************************************ 00:12:02.976 START TEST nvmf_invalid 00:12:02.976 ************************************ 00:12:02.976 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:03.235 * Looking for test storage... 00:12:03.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:03.235 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.235 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.235 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:03.235 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:03.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.236 --rc genhtml_branch_coverage=1 00:12:03.236 --rc genhtml_function_coverage=1 00:12:03.236 --rc genhtml_legend=1 00:12:03.236 --rc geninfo_all_blocks=1 00:12:03.236 --rc geninfo_unexecuted_blocks=1 00:12:03.236 00:12:03.236 ' 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:03.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.236 --rc genhtml_branch_coverage=1 00:12:03.236 --rc genhtml_function_coverage=1 00:12:03.236 --rc genhtml_legend=1 00:12:03.236 --rc geninfo_all_blocks=1 00:12:03.236 --rc geninfo_unexecuted_blocks=1 00:12:03.236 00:12:03.236 ' 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:03.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.236 --rc genhtml_branch_coverage=1 00:12:03.236 --rc genhtml_function_coverage=1 00:12:03.236 --rc genhtml_legend=1 00:12:03.236 --rc geninfo_all_blocks=1 00:12:03.236 --rc geninfo_unexecuted_blocks=1 00:12:03.236 00:12:03.236 ' 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:03.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.236 --rc genhtml_branch_coverage=1 00:12:03.236 --rc genhtml_function_coverage=1 00:12:03.236 --rc genhtml_legend=1 00:12:03.236 --rc geninfo_all_blocks=1 00:12:03.236 --rc geninfo_unexecuted_blocks=1 00:12:03.236 00:12:03.236 ' 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:03.236 15:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.236 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:03.496 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.496 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.496 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.496 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.496 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.497 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
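The nvmftestinit trace that follows (nvmf/common.sh@145 onward) first attempts a teardown pass, where the "Cannot find device" errors are expected on a fresh VM and each failing command is followed by true so cleanup continues, and then rebuilds the virtual test network: veth pairs for two initiators and two targets, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything enslaved to the nvmf_br bridge, plus iptables ACCEPT rules for port 4420. Condensed to the first initiator/target pair, the sequence amounts to roughly this (a sketch with names taken from the trace; ordering and error handling in nvmf/common.sh are simplified):

  # rebuild the veth/bridge topology used by the TCP tests (sketch, simplified)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # NVMF_FIRST_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # initiator -> target sanity check, as in common.sh@222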
00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:03.497 Cannot find device "nvmf_init_br" 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:03.497 Cannot find device "nvmf_init_br2" 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:03.497 Cannot find device "nvmf_tgt_br" 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:03.497 Cannot find device "nvmf_tgt_br2" 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:03.497 Cannot find device "nvmf_init_br" 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:03.497 Cannot find device "nvmf_init_br2" 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:03.497 Cannot find device "nvmf_tgt_br" 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:03.497 Cannot find device "nvmf_tgt_br2" 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:03.497 Cannot find device "nvmf_br" 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:03.497 Cannot find device "nvmf_init_if" 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:03.497 Cannot find device "nvmf_init_if2" 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:03.497 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:03.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:03.497 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:03.757 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:03.757 15:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:03.757 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:04.070 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:04.070 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.140 ms 00:12:04.070 00:12:04.070 --- 10.0.0.3 ping statistics --- 00:12:04.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.070 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:04.070 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:04.070 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.126 ms 00:12:04.070 00:12:04.070 --- 10.0.0.4 ping statistics --- 00:12:04.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.070 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:04.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:12:04.070 00:12:04.070 --- 10.0.0.1 ping statistics --- 00:12:04.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.070 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:04.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:04.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:12:04.070 00:12:04.070 --- 10.0.0.2 ping statistics --- 00:12:04.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.070 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=74906 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 74906 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 74906 ']' 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.070 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:04.070 [2024-11-20 15:19:50.384468] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
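At this point nvmfappstart has launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, nvmfpid=74906) and waitforlisten blocks until the app is serving on /var/tmp/spdk.sock. The trace shows max_retries=100 but not the loop body; a minimal readiness loop under the assumption that the helper simply polls the RPC socket (rpc_get_methods stands in here for whatever probe waitforlisten actually uses):

  # poll the target's JSON-RPC UNIX socket until it answers (sketch)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do           # max_retries=100, per the trace
      if "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
          break                             # target is up and serving RPCs
      fi
      sleep 0.5
  done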
00:12:04.070 [2024-11-20 15:19:50.384583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.070 [2024-11-20 15:19:50.525763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.330 [2024-11-20 15:19:50.608845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.330 [2024-11-20 15:19:50.608906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.330 [2024-11-20 15:19:50.608917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.330 [2024-11-20 15:19:50.608926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.330 [2024-11-20 15:19:50.608933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.330 [2024-11-20 15:19:50.610286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.330 [2024-11-20 15:19:50.610448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.330 [2024-11-20 15:19:50.610450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.330 [2024-11-20 15:19:50.610369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.898 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.898 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:04.898 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.898 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:04.898 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:05.158 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.158 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:05.158 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30059 00:12:05.158 [2024-11-20 15:19:51.600654] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:05.158 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/11/20 15:19:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30059 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:05.158 request: 00:12:05.158 { 00:12:05.158 "method": "nvmf_create_subsystem", 00:12:05.158 "params": { 00:12:05.158 "nqn": "nqn.2016-06.io.spdk:cnode30059", 00:12:05.158 "tgt_name": "foobar" 00:12:05.158 } 00:12:05.158 } 00:12:05.158 Got JSON-RPC error response 00:12:05.158 GoRPCClient: error on JSON-RPC call' 00:12:05.158 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/11/20 15:19:51 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode30059 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:05.158 request: 00:12:05.158 { 00:12:05.158 "method": "nvmf_create_subsystem", 00:12:05.158 "params": { 00:12:05.158 "nqn": "nqn.2016-06.io.spdk:cnode30059", 00:12:05.158 "tgt_name": "foobar" 00:12:05.158 } 00:12:05.158 } 00:12:05.158 Got JSON-RPC error response 00:12:05.158 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:05.158 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:05.158 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31868 00:12:05.417 [2024-11-20 15:19:51.840547] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31868: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:05.417 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/11/20 15:19:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31868 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:05.417 request: 00:12:05.417 { 00:12:05.417 "method": "nvmf_create_subsystem", 00:12:05.417 "params": { 00:12:05.417 "nqn": "nqn.2016-06.io.spdk:cnode31868", 00:12:05.417 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:05.417 } 00:12:05.417 } 00:12:05.417 Got JSON-RPC error response 00:12:05.417 GoRPCClient: error on JSON-RPC call' 00:12:05.417 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/11/20 15:19:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31868 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:05.417 request: 00:12:05.417 { 00:12:05.417 "method": "nvmf_create_subsystem", 00:12:05.417 "params": { 00:12:05.417 "nqn": "nqn.2016-06.io.spdk:cnode31868", 00:12:05.417 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:05.417 } 00:12:05.417 } 00:12:05.417 Got JSON-RPC error response 00:12:05.417 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:05.417 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:05.417 15:19:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32738 00:12:05.678 [2024-11-20 15:19:52.084354] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32738: invalid model number 'SPDK_Controller' 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/11/20 15:19:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode32738], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:05.678 request: 00:12:05.678 { 00:12:05.678 "method": "nvmf_create_subsystem", 00:12:05.678 "params": { 00:12:05.678 "nqn": "nqn.2016-06.io.spdk:cnode32738", 00:12:05.678 "model_number": "SPDK_Controller\u001f" 
00:12:05.678 } 00:12:05.678 } 00:12:05.678 Got JSON-RPC error response 00:12:05.678 GoRPCClient: error on JSON-RPC call' 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/11/20 15:19:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode32738], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:05.678 request: 00:12:05.678 { 00:12:05.678 "method": "nvmf_create_subsystem", 00:12:05.678 "params": { 00:12:05.678 "nqn": "nqn.2016-06.io.spdk:cnode32738", 00:12:05.678 "model_number": "SPDK_Controller\u001f" 00:12:05.678 } 00:12:05.678 } 00:12:05.678 Got JSON-RPC error response 00:12:05.678 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.678 
15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.678 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 
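The per-character trace running through here is gen_random_s 21 from target/invalid.sh, which assembles the invalid serial number '$-6sj/)#5(~mn)_w'~`jR' one byte at a time: each iteration picks a code point from the chars array (ASCII 32 through 127), renders it with printf %x and echo -e, and appends it to string. Since RANDOM=0 was seeded at invalid.sh@16 above, the sequence is reproducible across runs. Reconstructed as a sketch (the index selection via $RANDOM is inferred; the trace only shows each expansion):

  # build a <length>-char string from code points 32..127 (sketch of gen_random_s)
  gen_random_s() {
      local length=$1 ll string=
      local chars=({32..127})               # same code-point pool as the chars=() in the trace
      for ((ll = 0; ll < length; ll++)); do
          # printf %x yields the hex code; echo -e turns \xNN back into a character
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
  }

The loop trace continues below until all 21 characters are appended; the resulting string is then passed to nvmf_create_subsystem -s, which rejects it as an invalid SN.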
00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.938 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x27' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '$-6sj/)#5(~mn)_w'\''~`jR' 00:12:05.939 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '$-6sj/)#5(~mn)_w'\''~`jR' nqn.2016-06.io.spdk:cnode6351 00:12:06.199 [2024-11-20 15:19:52.508015] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6351: invalid serial number '$-6sj/)#5(~mn)_w'~`jR' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/11/20 15:19:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6351 serial_number:$-6sj/)#5(~mn)_w'\''~`jR], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN $-6sj/)#5(~mn)_w'\''~`jR 00:12:06.199 request: 00:12:06.199 { 00:12:06.199 "method": "nvmf_create_subsystem", 00:12:06.199 "params": { 00:12:06.199 "nqn": 
"nqn.2016-06.io.spdk:cnode6351", 00:12:06.199 "serial_number": "$-6sj/)#5(~mn)_w'\''~`jR" 00:12:06.199 } 00:12:06.199 } 00:12:06.199 Got JSON-RPC error response 00:12:06.199 GoRPCClient: error on JSON-RPC call' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/11/20 15:19:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6351 serial_number:$-6sj/)#5(~mn)_w'~`jR], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN $-6sj/)#5(~mn)_w'~`jR 00:12:06.199 request: 00:12:06.199 { 00:12:06.199 "method": "nvmf_create_subsystem", 00:12:06.199 "params": { 00:12:06.199 "nqn": "nqn.2016-06.io.spdk:cnode6351", 00:12:06.199 "serial_number": "$-6sj/)#5(~mn)_w'~`jR" 00:12:06.199 } 00:12:06.199 } 00:12:06.199 Got JSON-RPC error response 00:12:06.199 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:06.199 
15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:06.199 
15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:06.199 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:06.200 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.200 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.200 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:06.200 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:06.200 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:06.200 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.200 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.200 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:06.200 
15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:06.200 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:06.200 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.200 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.200 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.459 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='#' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 3 == \- ]] 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '3"EXf8bB4"B4>/>:hm[u"dcZ2`KKbEiyV'\''qV#0sf<' 00:12:06.460 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '3"EXf8bB4"B4>/>:hm[u"dcZ2`KKbEiyV'\''qV#0sf<' nqn.2016-06.io.spdk:cnode13285 00:12:06.720 [2024-11-20 15:19:53.099646] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13285: invalid model number '3"EXf8bB4"B4>/>:hm[u"dcZ2`KKbEiyV'qV#0sf<' 00:12:06.720 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/11/20 15:19:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:3"EXf8bB4"B4>/>:hm[u"dcZ2`KKbEiyV'\''qV#0sf< nqn:nqn.2016-06.io.spdk:cnode13285], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 3"EXf8bB4"B4>/>:hm[u"dcZ2`KKbEiyV'\''qV#0sf< 00:12:06.720 request: 00:12:06.720 { 00:12:06.720 "method": "nvmf_create_subsystem", 00:12:06.720 "params": { 00:12:06.720 "nqn": 
"nqn.2016-06.io.spdk:cnode13285", 00:12:06.720 "model_number": "3\"EXf8bB4\"B4>/>:hm[u\"dcZ2`KKbEiyV'\''qV#0sf<" 00:12:06.720 } 00:12:06.720 } 00:12:06.720 Got JSON-RPC error response 00:12:06.720 GoRPCClient: error on JSON-RPC call' 00:12:06.720 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/11/20 15:19:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:3"EXf8bB4"B4>/>:hm[u"dcZ2`KKbEiyV'qV#0sf< nqn:nqn.2016-06.io.spdk:cnode13285], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 3"EXf8bB4"B4>/>:hm[u"dcZ2`KKbEiyV'qV#0sf< 00:12:06.720 request: 00:12:06.720 { 00:12:06.720 "method": "nvmf_create_subsystem", 00:12:06.720 "params": { 00:12:06.720 "nqn": "nqn.2016-06.io.spdk:cnode13285", 00:12:06.720 "model_number": "3\"EXf8bB4\"B4>/>:hm[u\"dcZ2`KKbEiyV'qV#0sf<" 00:12:06.720 } 00:12:06.720 } 00:12:06.720 Got JSON-RPC error response 00:12:06.720 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:06.720 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:06.979 [2024-11-20 15:19:53.335536] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.979 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:07.238 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:07.238 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:07.238 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:07.238 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:07.238 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:07.497 [2024-11-20 15:19:53.851190] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:07.497 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/11/20 15:19:53 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:07.497 request: 00:12:07.497 { 00:12:07.497 "method": "nvmf_subsystem_remove_listener", 00:12:07.497 "params": { 00:12:07.497 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:07.497 "listen_address": { 00:12:07.497 "trtype": "tcp", 00:12:07.497 "traddr": "", 00:12:07.497 "trsvcid": "4421" 00:12:07.497 } 00:12:07.497 } 00:12:07.497 } 00:12:07.497 Got JSON-RPC error response 00:12:07.497 GoRPCClient: error on JSON-RPC call' 00:12:07.497 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/11/20 15:19:53 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:07.497 request: 00:12:07.497 { 00:12:07.497 "method": "nvmf_subsystem_remove_listener", 00:12:07.497 "params": { 00:12:07.497 
"nqn": "nqn.2016-06.io.spdk:cnode", 00:12:07.497 "listen_address": { 00:12:07.497 "trtype": "tcp", 00:12:07.497 "traddr": "", 00:12:07.497 "trsvcid": "4421" 00:12:07.497 } 00:12:07.497 } 00:12:07.497 } 00:12:07.497 Got JSON-RPC error response 00:12:07.497 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:07.497 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7508 -i 0 00:12:07.768 [2024-11-20 15:19:54.078979] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7508: invalid cntlid range [0-65519] 00:12:07.768 15:19:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/11/20 15:19:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode7508], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:07.768 request: 00:12:07.768 { 00:12:07.768 "method": "nvmf_create_subsystem", 00:12:07.768 "params": { 00:12:07.768 "nqn": "nqn.2016-06.io.spdk:cnode7508", 00:12:07.768 "min_cntlid": 0 00:12:07.768 } 00:12:07.768 } 00:12:07.768 Got JSON-RPC error response 00:12:07.768 GoRPCClient: error on JSON-RPC call' 00:12:07.768 15:19:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/11/20 15:19:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode7508], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:07.768 request: 00:12:07.768 { 00:12:07.768 "method": "nvmf_create_subsystem", 00:12:07.768 "params": { 00:12:07.768 "nqn": "nqn.2016-06.io.spdk:cnode7508", 00:12:07.768 "min_cntlid": 0 00:12:07.768 } 00:12:07.768 } 00:12:07.768 Got JSON-RPC error response 00:12:07.768 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:07.768 15:19:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20637 -i 65520 00:12:08.059 [2024-11-20 15:19:54.303847] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20637: invalid cntlid range [65520-65519] 00:12:08.059 15:19:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/11/20 15:19:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20637], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:08.059 request: 00:12:08.059 { 00:12:08.059 "method": "nvmf_create_subsystem", 00:12:08.059 "params": { 00:12:08.059 "nqn": "nqn.2016-06.io.spdk:cnode20637", 00:12:08.059 "min_cntlid": 65520 00:12:08.059 } 00:12:08.059 } 00:12:08.059 Got JSON-RPC error response 00:12:08.059 GoRPCClient: error on JSON-RPC call' 00:12:08.059 15:19:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/11/20 15:19:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20637], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:08.059 request: 00:12:08.059 { 00:12:08.059 "method": "nvmf_create_subsystem", 00:12:08.059 "params": { 
00:12:08.059 "nqn": "nqn.2016-06.io.spdk:cnode20637", 00:12:08.059 "min_cntlid": 65520 00:12:08.059 } 00:12:08.059 } 00:12:08.059 Got JSON-RPC error response 00:12:08.059 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:08.059 15:19:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21078 -I 0 00:12:08.059 [2024-11-20 15:19:54.531620] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21078: invalid cntlid range [1-0] 00:12:08.318 15:19:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/11/20 15:19:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode21078], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:08.318 request: 00:12:08.318 { 00:12:08.318 "method": "nvmf_create_subsystem", 00:12:08.318 "params": { 00:12:08.318 "nqn": "nqn.2016-06.io.spdk:cnode21078", 00:12:08.318 "max_cntlid": 0 00:12:08.318 } 00:12:08.318 } 00:12:08.318 Got JSON-RPC error response 00:12:08.318 GoRPCClient: error on JSON-RPC call' 00:12:08.318 15:19:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/11/20 15:19:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode21078], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:08.318 request: 00:12:08.318 { 00:12:08.318 "method": "nvmf_create_subsystem", 00:12:08.318 "params": { 00:12:08.318 "nqn": "nqn.2016-06.io.spdk:cnode21078", 00:12:08.318 "max_cntlid": 0 00:12:08.318 } 00:12:08.318 } 00:12:08.318 Got JSON-RPC error response 00:12:08.318 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:08.318 15:19:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6472 -I 65520 00:12:08.318 [2024-11-20 15:19:54.767461] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6472: invalid cntlid range [1-65520] 00:12:08.318 15:19:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/11/20 15:19:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode6472], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:08.318 request: 00:12:08.318 { 00:12:08.318 "method": "nvmf_create_subsystem", 00:12:08.318 "params": { 00:12:08.318 "nqn": "nqn.2016-06.io.spdk:cnode6472", 00:12:08.318 "max_cntlid": 65520 00:12:08.318 } 00:12:08.318 } 00:12:08.318 Got JSON-RPC error response 00:12:08.318 GoRPCClient: error on JSON-RPC call' 00:12:08.318 15:19:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/11/20 15:19:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode6472], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:08.318 request: 00:12:08.318 { 00:12:08.318 "method": "nvmf_create_subsystem", 00:12:08.318 "params": { 00:12:08.318 "nqn": "nqn.2016-06.io.spdk:cnode6472", 00:12:08.318 "max_cntlid": 65520 00:12:08.318 } 00:12:08.318 } 
00:12:08.318 Got JSON-RPC error response 00:12:08.318 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:08.318 15:19:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24996 -i 6 -I 5 00:12:08.577 [2024-11-20 15:19:54.991308] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24996: invalid cntlid range [6-5] 00:12:08.577 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/11/20 15:19:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode24996], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:08.577 request: 00:12:08.577 { 00:12:08.577 "method": "nvmf_create_subsystem", 00:12:08.577 "params": { 00:12:08.577 "nqn": "nqn.2016-06.io.spdk:cnode24996", 00:12:08.577 "min_cntlid": 6, 00:12:08.577 "max_cntlid": 5 00:12:08.577 } 00:12:08.577 } 00:12:08.577 Got JSON-RPC error response 00:12:08.577 GoRPCClient: error on JSON-RPC call' 00:12:08.577 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/11/20 15:19:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode24996], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:08.577 request: 00:12:08.577 { 00:12:08.577 "method": "nvmf_create_subsystem", 00:12:08.577 "params": { 00:12:08.577 "nqn": "nqn.2016-06.io.spdk:cnode24996", 00:12:08.577 "min_cntlid": 6, 00:12:08.577 "max_cntlid": 5 00:12:08.577 } 00:12:08.577 } 00:12:08.577 Got JSON-RPC error response 00:12:08.577 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:08.578 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:08.837 { 00:12:08.837 "name": "foobar", 00:12:08.837 "method": "nvmf_delete_target", 00:12:08.837 "req_id": 1 00:12:08.837 } 00:12:08.837 Got JSON-RPC error response 00:12:08.837 response: 00:12:08.837 { 00:12:08.837 "code": -32602, 00:12:08.837 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:08.837 }' 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:08.837 { 00:12:08.837 "name": "foobar", 00:12:08.837 "method": "nvmf_delete_target", 00:12:08.837 "req_id": 1 00:12:08.837 } 00:12:08.837 Got JSON-RPC error response 00:12:08.837 response: 00:12:08.837 { 00:12:08.837 "code": -32602, 00:12:08.837 "message": "The specified target doesn't exist, cannot delete it." 
00:12:08.837 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:08.837 rmmod nvme_tcp 00:12:08.837 rmmod nvme_fabrics 00:12:08.837 rmmod nvme_keyring 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 74906 ']' 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 74906 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 74906 ']' 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 74906 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.837 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74906 00:12:09.096 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.096 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.096 killing process with pid 74906 00:12:09.096 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74906' 00:12:09.096 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 74906 00:12:09.096 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 74906 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:09.355 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:12:09.614 00:12:09.614 real 0m6.481s 00:12:09.614 user 0m22.961s 00:12:09.614 sys 0m2.021s 00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.614 ************************************ 00:12:09.614 END TEST nvmf_invalid 00:12:09.614 ************************************ 00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.614 ************************************ 00:12:09.614 START TEST nvmf_connect_stress 00:12:09.614 ************************************ 
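The nvmf_invalid suite that just finished drives rpc.py with deliberately malformed input and asserts on the JSON-RPC failures: a randomly generated 21-character serial number and a 41-character model number (the NVMe SN and MN fields hold at most 20 and 40 bytes) are rejected with Code=-32602 Invalid SN / Invalid MN, cntlid bounds outside 1-65519 or with min_cntlid > max_cntlid come back as Invalid cntlid range, and deleting the nonexistent target foobar fails with "The specified target doesn't exist, cannot delete it." Below is a minimal sketch of the same negative-test pattern, assuming a running nvmf target and the in-tree rpc.py; the cnode9999 NQN and the all-'A' serial number are made up for illustration:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 21 bytes exceeds the 20-byte NVMe serial-number field, so expect Invalid SN
    sn=$(printf 'A%.0s' {1..21})
    out=$($rpc nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode9999 2>&1) || true
    [[ $out == *"Invalid SN"* ]] && echo "over-long serial rejected as expected"
    # cntlid must stay inside [1, 65519] with min <= max, so -i 0 should fail too
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] && echo "cntlid 0 rejected as expected"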
00:12:09.614 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:09.876 * Looking for test storage... 00:12:09.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:09.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.876 --rc genhtml_branch_coverage=1 00:12:09.876 --rc genhtml_function_coverage=1 00:12:09.876 --rc genhtml_legend=1 00:12:09.876 --rc geninfo_all_blocks=1 00:12:09.876 --rc geninfo_unexecuted_blocks=1 00:12:09.876 00:12:09.876 ' 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:09.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.876 --rc genhtml_branch_coverage=1 00:12:09.876 --rc genhtml_function_coverage=1 00:12:09.876 --rc genhtml_legend=1 00:12:09.876 --rc geninfo_all_blocks=1 00:12:09.876 --rc geninfo_unexecuted_blocks=1 00:12:09.876 00:12:09.876 ' 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:09.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.876 --rc genhtml_branch_coverage=1 00:12:09.876 --rc genhtml_function_coverage=1 00:12:09.876 --rc genhtml_legend=1 00:12:09.876 --rc geninfo_all_blocks=1 00:12:09.876 --rc geninfo_unexecuted_blocks=1 00:12:09.876 00:12:09.876 ' 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:09.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.876 --rc genhtml_branch_coverage=1 00:12:09.876 --rc genhtml_function_coverage=1 00:12:09.876 --rc genhtml_legend=1 00:12:09.876 --rc geninfo_all_blocks=1 00:12:09.876 --rc geninfo_unexecuted_blocks=1 00:12:09.876 00:12:09.876 ' 00:12:09.876 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
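The lcov gate traced above comes from scripts/common.sh: cmp_versions splits both version strings on '.', '-' and ':' (IFS=.-:) and compares the numeric components left to right, so `lt 1.15 2` holds and the pre-2.0 --rc lcov_branch_coverage/lcov_function_coverage flags get exported. A standalone sketch of that comparison, assuming purely numeric components (the helper name version_lt is invented here; the in-tree function is cmp_versions and takes an operator argument):

    version_lt() {                      # true when $1 sorts before $2
        local -a v1 v2
        local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} && i < ${#v2[@]}; i++)); do
            (( v1[i] < v2[i] )) && return 0
            (( v1[i] > v2[i] )) && return 1
        done
        (( ${#v1[@]} < ${#v2[@]} ))     # equal prefix: the shorter version sorts first
    }
    version_lt 1.15 2 && echo "lcov older than 2.x, use legacy --rc options"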
00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.877 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:09.877 15:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:09.877 15:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:09.877 Cannot find device "nvmf_init_br" 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:09.877 Cannot find device "nvmf_init_br2" 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:09.877 Cannot find device "nvmf_tgt_br" 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:09.877 Cannot find device "nvmf_tgt_br2" 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:12:09.877 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:10.137 Cannot find device "nvmf_init_br" 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:10.137 Cannot find device "nvmf_init_br2" 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:10.137 Cannot find device "nvmf_tgt_br" 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:10.137 Cannot find device "nvmf_tgt_br2" 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:10.137 Cannot find device "nvmf_br" 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:10.137 Cannot find device "nvmf_init_if" 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:10.137 Cannot find device "nvmf_init_if2" 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:10.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.137 15:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:10.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:10.137 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:10.397 15:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:10.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:10.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:12:10.397 00:12:10.397 --- 10.0.0.3 ping statistics --- 00:12:10.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.397 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:10.397 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:10.397 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:12:10.397 00:12:10.397 --- 10.0.0.4 ping statistics --- 00:12:10.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.397 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:10.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:10.397 00:12:10.397 --- 10.0.0.1 ping statistics --- 00:12:10.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.397 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:10.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:10.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:12:10.397 00:12:10.397 --- 10.0.0.2 ping statistics --- 00:12:10.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.397 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=75478 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 75478 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 75478 ']' 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.397 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.397 [2024-11-20 15:19:56.810735] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
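The nvmf_veth_init sequence traced above is easier to audit with the xtrace prefixes stripped, so here is a condensed sketch of what it executes before nvmf_tgt is launched inside the namespace. Every device name, address, and rule below is copied from the log (run as root); the fault-tolerant pre-cleanup probes that produced the "Cannot find device" messages are omitted.

# Topology: two initiator veths on the host, two target veths inside the
# nvmf_tgt_ns_spdk namespace, all of their peer ends enslaved to nvmf_br.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # first initiator
ip addr add 10.0.0.2/24 dev nvmf_init_if2                                # second initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # first target
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2  # second target

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port and allow intra-bridge forwarding. The actual run
# tags each rule with -m comment 'SPDK_NVMF:...' so teardown can strip them
# later via iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks in both directions, as in the ping stanzas above
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2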
00:12:10.397 [2024-11-20 15:19:56.810812] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.657 [2024-11-20 15:19:56.952041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:10.657 [2024-11-20 15:19:57.030418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.657 [2024-11-20 15:19:57.030478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.657 [2024-11-20 15:19:57.030489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.657 [2024-11-20 15:19:57.030498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.657 [2024-11-20 15:19:57.030505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.657 [2024-11-20 15:19:57.031807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.657 [2024-11-20 15:19:57.031890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.657 [2024-11-20 15:19:57.031895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.609 [2024-11-20 15:19:57.780636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:11.609 15:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.609 [2024-11-20 15:19:57.812776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.609 NULL1 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=75535 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.609 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.610 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.869 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:12:11.869 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:11.869 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.869 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.869 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.129 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.129 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:12.129 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.129 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.129 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.695 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.695 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:12.695 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.695 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.695 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.952 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.952 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:12.952 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.952 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.952 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.210 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.210 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:13.210 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.210 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.210 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.468 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.468 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:13.468 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.468 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.468 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.034 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.034 
15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:14.034 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.034 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.034 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.292 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.292 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:14.292 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.292 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.292 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.549 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.549 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:14.550 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.550 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.550 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.808 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.808 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:14.808 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.808 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.808 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.066 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.066 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:15.066 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.066 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.066 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.632 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.632 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:15.632 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.632 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.632 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.889 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.889 15:20:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:15.890 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.890 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.890 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.220 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.220 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:16.220 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.220 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.220 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.490 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.490 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:16.490 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.490 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.490 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.748 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.748 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:16.748 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.748 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.748 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.005 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.005 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:17.005 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.005 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.005 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.569 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.569 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:17.569 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.569 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.569 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.827 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.827 15:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:17.827 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.827 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.827 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.085 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.085 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:18.085 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.085 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.085 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.344 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.344 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:18.344 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.344 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.344 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.911 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.911 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:18.911 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.911 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.911 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.169 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.169 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:19.169 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.169 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.169 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.475 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.475 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:19.475 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.475 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.475 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.041 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.041 15:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:20.041 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.041 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.041 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.299 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.299 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:20.299 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.299 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.299 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.557 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.557 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:20.557 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.557 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.557 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.824 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.824 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:20.824 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.824 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.824 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.095 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.095 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:21.095 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.095 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.095 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.660 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.660 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:21.660 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.660 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.661 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.661 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75535 00:12:21.919 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75535) - No such process 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75535 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.919 rmmod nvme_tcp 00:12:21.919 rmmod nvme_fabrics 00:12:21.919 rmmod nvme_keyring 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 75478 ']' 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 75478 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 75478 ']' 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 75478 00:12:21.919 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:22.178 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.178 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75478 00:12:22.178 killing process with pid 75478 00:12:22.178 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:22.178 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:22.178 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75478' 00:12:22.178 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 75478 00:12:22.178 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 75478 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
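Between the target bring-up and the teardown above, the @15-@38 markers trace test/nvmf/target/connect_stress.sh. The sketch below reconstructs its shape from those markers; the arguments are copied from the log, but the rpc.txt batch assembled by the seq 1 20 / cat loop is not visible in the trace, and the rpc_cmd wrapper is approximated here as a plain rpc.py invocation, so treat this as a reconstruction rather than the script itself.

# The target was started earlier inside the namespace as:
#   ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192                   # @15
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                            # @16
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420                                # @17
$rpc_py bdev_null_create NULL1 1000 512                           # @18: NULL1 backs the namespace

# @20-@21: run the connect/disconnect stressor against the listener for 10s
/home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress \
    -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!    # 75535 in this run

# @34-@35: while the stressor is still alive, keep replaying the batched
# RPCs at the target; the loop exits once kill -0 fails, which is the
# "(75535) - No such process" message in the log above.
while kill -0 "$PERF_PID"; do
    $rpc_py < rpc.txt
done
wait "$PERF_PID"    # @38

The property under test is that the target keeps servicing management RPCs while initiators connect and disconnect as fast as the stressor can drive them.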
00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:22.438 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:22.698 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:22.698 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:22.698 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:22.698 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.698 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.698 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.698 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:12:22.698 00:12:22.698 real 0m13.029s 00:12:22.698 user 0m36.419s 00:12:22.698 sys 0m6.063s 00:12:22.698 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.698 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.698 ************************************ 00:12:22.698 END TEST nvmf_connect_stress 00:12:22.698 ************************************ 00:12:22.698 15:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:22.698 15:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:22.698 15:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.698 15:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:22.698 ************************************ 00:12:22.698 START TEST nvmf_fused_ordering 00:12:22.698 ************************************ 00:12:22.698 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:22.957 * Looking for test storage... 00:12:22.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:22.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.957 --rc genhtml_branch_coverage=1 00:12:22.957 --rc genhtml_function_coverage=1 00:12:22.957 --rc genhtml_legend=1 00:12:22.957 --rc geninfo_all_blocks=1 00:12:22.957 --rc geninfo_unexecuted_blocks=1 00:12:22.957 00:12:22.957 ' 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:22.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.957 --rc genhtml_branch_coverage=1 00:12:22.957 --rc genhtml_function_coverage=1 00:12:22.957 --rc genhtml_legend=1 00:12:22.957 --rc geninfo_all_blocks=1 00:12:22.957 --rc geninfo_unexecuted_blocks=1 00:12:22.957 00:12:22.957 ' 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:22.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.957 --rc genhtml_branch_coverage=1 00:12:22.957 --rc genhtml_function_coverage=1 00:12:22.957 --rc genhtml_legend=1 00:12:22.957 --rc geninfo_all_blocks=1 00:12:22.957 --rc geninfo_unexecuted_blocks=1 00:12:22.957 00:12:22.957 ' 00:12:22.957 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:22.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.957 --rc genhtml_branch_coverage=1 00:12:22.957 --rc genhtml_function_coverage=1 00:12:22.957 --rc genhtml_legend=1 00:12:22.957 --rc geninfo_all_blocks=1 00:12:22.957 --rc geninfo_unexecuted_blocks=1 00:12:22.957 00:12:22.957 ' 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
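Before the fused_ordering test proper starts, the trace above runs the lcov version probe from scripts/common.sh (lt 1.15 2 via cmp_versions) to pick which coverage flags to export. Below is a simplified reconstruction of that comparison logic; the real decimal helper also normalizes hex and alphabetic tokens, so this is a sketch of the traced path only, not a drop-in copy.

# Simplified sketch of scripts/common.sh cmp_versions / lt as traced above
decimal() {
    local d=${1-}
    # the real helper also maps 0x.. and a-z tokens; non-numeric parts become 0 here
    if [[ $d =~ ^[0-9]+$ ]]; then echo "$d"; else echo 0; fi
}

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"    # split "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"    # split "2"    -> (2)
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        d1=$(decimal "${ver1[v]-}")
        d2=$(decimal "${ver2[v]-}")
        if ((d1 > d2)); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if ((d1 < d2)); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]    # all elements compared equal
}

lt() { cmp_versions "$1" '<' "$2"; }

# As in the log: lcov 1.15 predates 2.x, so the pre-2.0 branch-coverage
# flags (--rc lcov_branch_coverage=1 ...) get exported into LCOV_OPTS.
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "using pre-2.0 lcov flags"

Splitting on IFS=.-: is what lets one helper rank mixed version strings such as 1.15 and 2.0-rc1 element by element.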
00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:22.958 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:22.958 15:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:22.958 15:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:22.958 Cannot find device "nvmf_init_br" 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:12:22.958 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:23.217 Cannot find device "nvmf_init_br2" 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:23.217 Cannot find device "nvmf_tgt_br" 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:23.217 Cannot find device "nvmf_tgt_br2" 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:23.217 Cannot find device "nvmf_init_br" 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:23.217 Cannot find device "nvmf_init_br2" 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:23.217 Cannot find device "nvmf_tgt_br" 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:23.217 Cannot find device "nvmf_tgt_br2" 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:23.217 Cannot find device "nvmf_br" 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:23.217 Cannot find device "nvmf_init_if" 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:23.217 Cannot find device "nvmf_init_if2" 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:23.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:23.217 15:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:12:23.217 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:23.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:23.218 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:12:23.218 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:23.218 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:23.218 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:23.218 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:23.218 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:23.218 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:23.477 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:23.477 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:23.477 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:23.477 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:23.477 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:23.477 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:23.477 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:23.477 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:23.477 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:23.478 15:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:23.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:23.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:12:23.478 00:12:23.478 --- 10.0.0.3 ping statistics --- 00:12:23.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.478 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:23.478 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:23.478 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.089 ms 00:12:23.478 00:12:23.478 --- 10.0.0.4 ping statistics --- 00:12:23.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.478 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:23.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:12:23.478 00:12:23.478 --- 10.0.0.1 ping statistics --- 00:12:23.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.478 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:23.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:23.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:12:23.478 00:12:23.478 --- 10.0.0.2 ping statistics --- 00:12:23.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.478 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=75920 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:23.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 75920 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 75920 ']' 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.478 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.478 [2024-11-20 15:20:09.954371] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
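The nvmf_veth_init sequence traced above first tries to delete any leftover devices (hence the harmless "Cannot find device" lines), then rebuilds the test topology: veth pairs whose target-side ends live in the nvmf_tgt_ns_spdk namespace, host-side peers enslaved to the nvmf_br bridge, and iptables holes for NVMe/TCP port 4420. A root-only sketch of the same topology, reduced to the first initiator/target pair (names and addresses as in the trace; the trace's ipts wrapper additionally tags each rule with an SPDK_NVMF comment):

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end into the netns

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so initiator traffic reaches the namespaced target.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open the NVMe/TCP port and allow bridged forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3    # host -> namespaced target, as the trace verifies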
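With the network in place, nvmfappstart launches the target binary inside the namespace (the trace shows ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and waitforlisten blocks until the RPC socket answers. A simplified stand-in for that pair; the polling loop below is an illustration, not the real waitforlisten from autotest_common.sh:

NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)

# Core mask 0x2, shm id 0, all tracepoint groups enabled, exactly as traced.
"${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Wait until the app services RPCs on the default /var/tmp/spdk.sock before
# any rpc_cmd is issued (the unix socket is visible across the netns).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.2
done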
00:12:23.478 [2024-11-20 15:20:09.954658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.737 [2024-11-20 15:20:10.108175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.737 [2024-11-20 15:20:10.177361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.737 [2024-11-20 15:20:10.177427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.737 [2024-11-20 15:20:10.177438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.738 [2024-11-20 15:20:10.177446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.738 [2024-11-20 15:20:10.177454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.738 [2024-11-20 15:20:10.177849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.674 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.674 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:24.674 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:24.674 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:24.674 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.674 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.675 [2024-11-20 15:20:10.914449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.675 
[2024-11-20 15:20:10.938551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.675 NULL1 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.675 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:24.675 [2024-11-20 15:20:11.008566] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
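The rpc_cmd calls traced above assemble the target configuration before the test binary runs. Spelled out as direct rpc.py invocations they would look roughly like this (rpc_cmd is a thin wrapper around rpc.py; flags are copied verbatim from the trace and their meanings beyond what the trace shows are not asserted):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Backing namespace: a 1000 MiB, 512-byte-block null bdev, attached after
# bdev examination settles (hence the "size: 1GB" the initiator reports below).
$rpc bdev_null_create NULL1 1000 512
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# The fused_ordering app then connects over TCP and prints one
# fused_ordering(N) line per completed operation, 0 through 1023 below.
/home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'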
00:12:24.675 [2024-11-20 15:20:11.008618] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75970 ] 00:12:24.934 Attached to nqn.2016-06.io.spdk:cnode1 00:12:24.934 Namespace ID: 1 size: 1GB 00:12:24.934 fused_ordering(0) 00:12:24.934 fused_ordering(1) 00:12:24.934 fused_ordering(2) 00:12:24.934 fused_ordering(3) 00:12:24.934 fused_ordering(4) 00:12:24.934 fused_ordering(5) 00:12:24.934 fused_ordering(6) 00:12:24.934 fused_ordering(7) 00:12:24.934 fused_ordering(8) 00:12:24.934 fused_ordering(9) 00:12:24.934 fused_ordering(10) 00:12:24.934 fused_ordering(11) 00:12:24.934 fused_ordering(12) 00:12:24.934 fused_ordering(13) 00:12:24.934 fused_ordering(14) 00:12:24.934 fused_ordering(15) 00:12:24.934 fused_ordering(16) 00:12:24.934 fused_ordering(17) 00:12:24.934 fused_ordering(18) 00:12:24.934 fused_ordering(19) 00:12:24.934 fused_ordering(20) 00:12:24.934 fused_ordering(21) 00:12:24.934 fused_ordering(22) 00:12:24.934 fused_ordering(23) 00:12:24.934 fused_ordering(24) 00:12:24.934 fused_ordering(25) 00:12:24.934 fused_ordering(26) 00:12:24.934 fused_ordering(27) 00:12:24.934 fused_ordering(28) 00:12:24.934 fused_ordering(29) 00:12:24.934 fused_ordering(30) 00:12:24.934 fused_ordering(31) 00:12:24.934 fused_ordering(32) 00:12:24.934 fused_ordering(33) 00:12:24.934 fused_ordering(34) 00:12:24.934 fused_ordering(35) 00:12:24.934 fused_ordering(36) 00:12:24.934 fused_ordering(37) 00:12:24.934 fused_ordering(38) 00:12:24.934 fused_ordering(39) 00:12:24.934 fused_ordering(40) 00:12:24.934 fused_ordering(41) 00:12:24.934 fused_ordering(42) 00:12:24.934 fused_ordering(43) 00:12:24.934 fused_ordering(44) 00:12:24.934 fused_ordering(45) 00:12:24.934 fused_ordering(46) 00:12:24.934 fused_ordering(47) 00:12:24.934 fused_ordering(48) 00:12:24.934 fused_ordering(49) 00:12:24.934 fused_ordering(50) 00:12:24.934 fused_ordering(51) 00:12:24.934 fused_ordering(52) 00:12:24.934 fused_ordering(53) 00:12:24.934 fused_ordering(54) 00:12:24.934 fused_ordering(55) 00:12:24.934 fused_ordering(56) 00:12:24.934 fused_ordering(57) 00:12:24.934 fused_ordering(58) 00:12:24.934 fused_ordering(59) 00:12:24.934 fused_ordering(60) 00:12:24.934 fused_ordering(61) 00:12:24.934 fused_ordering(62) 00:12:24.934 fused_ordering(63) 00:12:24.934 fused_ordering(64) 00:12:24.934 fused_ordering(65) 00:12:24.934 fused_ordering(66) 00:12:24.934 fused_ordering(67) 00:12:24.934 fused_ordering(68) 00:12:24.934 fused_ordering(69) 00:12:24.934 fused_ordering(70) 00:12:24.934 fused_ordering(71) 00:12:24.934 fused_ordering(72) 00:12:24.934 fused_ordering(73) 00:12:24.934 fused_ordering(74) 00:12:24.934 fused_ordering(75) 00:12:24.934 fused_ordering(76) 00:12:24.934 fused_ordering(77) 00:12:24.934 fused_ordering(78) 00:12:24.934 fused_ordering(79) 00:12:24.934 fused_ordering(80) 00:12:24.934 fused_ordering(81) 00:12:24.934 fused_ordering(82) 00:12:24.934 fused_ordering(83) 00:12:24.934 fused_ordering(84) 00:12:24.934 fused_ordering(85) 00:12:24.934 fused_ordering(86) 00:12:24.934 fused_ordering(87) 00:12:24.934 fused_ordering(88) 00:12:24.934 fused_ordering(89) 00:12:24.934 fused_ordering(90) 00:12:24.934 fused_ordering(91) 00:12:24.934 fused_ordering(92) 00:12:24.934 fused_ordering(93) 00:12:24.934 fused_ordering(94) 00:12:24.934 fused_ordering(95) 00:12:24.934 fused_ordering(96) 00:12:24.934 fused_ordering(97) 00:12:24.934 
[fused_ordering(98) through fused_ordering(957) elided: 860 consecutive, strictly in-order entries, with the build timestamps advancing from 00:12:24.934 to 00:12:26.593]
fused_ordering(958) 00:12:26.593 fused_ordering(959) 00:12:26.593 fused_ordering(960) 00:12:26.593 fused_ordering(961) 00:12:26.593 fused_ordering(962) 00:12:26.593 fused_ordering(963) 00:12:26.593 fused_ordering(964) 00:12:26.593 fused_ordering(965) 00:12:26.593 fused_ordering(966) 00:12:26.593 fused_ordering(967) 00:12:26.593 fused_ordering(968) 00:12:26.593 fused_ordering(969) 00:12:26.593 fused_ordering(970) 00:12:26.593 fused_ordering(971) 00:12:26.593 fused_ordering(972) 00:12:26.593 fused_ordering(973) 00:12:26.593 fused_ordering(974) 00:12:26.593 fused_ordering(975) 00:12:26.593 fused_ordering(976) 00:12:26.593 fused_ordering(977) 00:12:26.593 fused_ordering(978) 00:12:26.593 fused_ordering(979) 00:12:26.593 fused_ordering(980) 00:12:26.593 fused_ordering(981) 00:12:26.593 fused_ordering(982) 00:12:26.593 fused_ordering(983) 00:12:26.593 fused_ordering(984) 00:12:26.593 fused_ordering(985) 00:12:26.593 fused_ordering(986) 00:12:26.593 fused_ordering(987) 00:12:26.593 fused_ordering(988) 00:12:26.593 fused_ordering(989) 00:12:26.593 fused_ordering(990) 00:12:26.593 fused_ordering(991) 00:12:26.593 fused_ordering(992) 00:12:26.593 fused_ordering(993) 00:12:26.593 fused_ordering(994) 00:12:26.593 fused_ordering(995) 00:12:26.593 fused_ordering(996) 00:12:26.593 fused_ordering(997) 00:12:26.593 fused_ordering(998) 00:12:26.593 fused_ordering(999) 00:12:26.593 fused_ordering(1000) 00:12:26.593 fused_ordering(1001) 00:12:26.593 fused_ordering(1002) 00:12:26.593 fused_ordering(1003) 00:12:26.593 fused_ordering(1004) 00:12:26.593 fused_ordering(1005) 00:12:26.593 fused_ordering(1006) 00:12:26.593 fused_ordering(1007) 00:12:26.593 fused_ordering(1008) 00:12:26.593 fused_ordering(1009) 00:12:26.593 fused_ordering(1010) 00:12:26.593 fused_ordering(1011) 00:12:26.593 fused_ordering(1012) 00:12:26.593 fused_ordering(1013) 00:12:26.593 fused_ordering(1014) 00:12:26.593 fused_ordering(1015) 00:12:26.593 fused_ordering(1016) 00:12:26.593 fused_ordering(1017) 00:12:26.593 fused_ordering(1018) 00:12:26.593 fused_ordering(1019) 00:12:26.593 fused_ordering(1020) 00:12:26.593 fused_ordering(1021) 00:12:26.593 fused_ordering(1022) 00:12:26.593 fused_ordering(1023) 00:12:26.593 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:26.593 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:26.593 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.593 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:26.593 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.593 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:26.593 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.593 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.594 rmmod nvme_tcp 00:12:26.594 rmmod nvme_fabrics 00:12:26.594 rmmod nvme_keyring 00:12:26.594 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.594 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:26.594 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:26.594 15:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 75920 ']' 00:12:26.594 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 75920 00:12:26.594 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 75920 ']' 00:12:26.594 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 75920 00:12:26.594 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:26.594 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.594 15:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75920 00:12:26.594 killing process with pid 75920 00:12:26.594 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:26.594 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:26.594 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75920' 00:12:26.594 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 75920 00:12:26.594 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 75920 00:12:26.853 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:26.853 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:26.853 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:26.853 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:26.853 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:26.853 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:26.853 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:26.853 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:26.853 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:26.853 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:26.853 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:26.853 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:12:27.112 00:12:27.112 real 0m4.482s 00:12:27.112 user 0m4.570s 00:12:27.112 sys 0m1.707s 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.112 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.371 ************************************ 00:12:27.371 END TEST nvmf_fused_ordering 00:12:27.371 ************************************ 00:12:27.371 15:20:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:27.371 15:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:27.371 15:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.371 15:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:27.371 ************************************ 00:12:27.371 START TEST nvmf_ns_masking 00:12:27.371 ************************************ 00:12:27.371 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:27.371 * Looking for test storage... 
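For orientation while reading the trace: each suite in this log is launched through the run_test helper from common/autotest_common.sh, which prints the asterisk-framed START TEST / END TEST banners and the real/user/sys timing visible above for nvmf_fused_ordering and below for nvmf_ns_masking. The snippet that follows is a minimal sketch of that wrapper pattern, reconstructed only from the banners and `time` output seen in this log, not from the helper's actual source; the argument-count check ('[' 3 -le 1 ']') and xtrace toggling that also appear in the trace are deliberately omitted.

    # Sketch of the run_test wrapper pattern seen in this trace (assumed
    # shape, not the real autotest_common.sh implementation).
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # e.g. test/nvmf/target/ns_masking.sh --transport=tcp
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
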
00:12:27.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:27.371 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:27.371 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:27.371 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:27.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.632 --rc genhtml_branch_coverage=1 00:12:27.632 --rc genhtml_function_coverage=1 00:12:27.632 --rc genhtml_legend=1 00:12:27.632 --rc geninfo_all_blocks=1 00:12:27.632 --rc geninfo_unexecuted_blocks=1 00:12:27.632 00:12:27.632 ' 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:27.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.632 --rc genhtml_branch_coverage=1 00:12:27.632 --rc genhtml_function_coverage=1 00:12:27.632 --rc genhtml_legend=1 00:12:27.632 --rc geninfo_all_blocks=1 00:12:27.632 --rc geninfo_unexecuted_blocks=1 00:12:27.632 00:12:27.632 ' 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:27.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.632 --rc genhtml_branch_coverage=1 00:12:27.632 --rc genhtml_function_coverage=1 00:12:27.632 --rc genhtml_legend=1 00:12:27.632 --rc geninfo_all_blocks=1 00:12:27.632 --rc geninfo_unexecuted_blocks=1 00:12:27.632 00:12:27.632 ' 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:27.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.632 --rc genhtml_branch_coverage=1 00:12:27.632 --rc genhtml_function_coverage=1 00:12:27.632 --rc genhtml_legend=1 00:12:27.632 --rc geninfo_all_blocks=1 00:12:27.632 --rc geninfo_unexecuted_blocks=1 00:12:27.632 00:12:27.632 ' 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.632 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.633 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=17374df1-007f-4134-b954-bd496156c65d 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=b3c2573b-dd70-4b39-8505-a05bdbac9379 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=8dad6680-8f87-41ce-b4dd-1eaa013ce9e2 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:27.633 15:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:27.633 15:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:27.633 Cannot find device "nvmf_init_br" 00:12:27.633 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:12:27.633 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:27.633 Cannot find device "nvmf_init_br2" 00:12:27.633 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:12:27.633 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:27.633 Cannot find device "nvmf_tgt_br" 00:12:27.633 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:12:27.633 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:27.633 Cannot find device "nvmf_tgt_br2" 00:12:27.633 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:12:27.633 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:27.633 Cannot find device "nvmf_init_br" 00:12:27.633 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:12:27.633 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:27.633 Cannot find device "nvmf_init_br2" 00:12:27.633 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:12:27.633 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:27.893 Cannot find device "nvmf_tgt_br" 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:27.893 Cannot find device 
"nvmf_tgt_br2" 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:27.893 Cannot find device "nvmf_br" 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:27.893 Cannot find device "nvmf_init_if" 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:27.893 Cannot find device "nvmf_init_if2" 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:27.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:27.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:27.893 
15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:27.893 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:28.153 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:28.154 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:28.154 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:12:28.154 00:12:28.154 --- 10.0.0.3 ping statistics --- 00:12:28.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.154 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:28.154 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:12:28.154 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:12:28.154 00:12:28.154 --- 10.0.0.4 ping statistics --- 00:12:28.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.154 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:28.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:12:28.154 00:12:28.154 --- 10.0.0.1 ping statistics --- 00:12:28.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.154 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:28.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:12:28.154 00:12:28.154 --- 10.0.0.2 ping statistics --- 00:12:28.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.154 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=76240 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 76240 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 76240 ']' 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.154 15:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:28.154 [2024-11-20 15:20:14.547420] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:12:28.154 [2024-11-20 15:20:14.547499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.415 [2024-11-20 15:20:14.682174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.415 [2024-11-20 15:20:14.762874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.415 [2024-11-20 15:20:14.763160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.415 [2024-11-20 15:20:14.763179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.415 [2024-11-20 15:20:14.763187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.415 [2024-11-20 15:20:14.763196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.415 [2024-11-20 15:20:14.763604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.373 15:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.373 15:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:29.373 15:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:29.373 15:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:29.373 15:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:29.373 15:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.373 15:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:29.373 [2024-11-20 15:20:15.734460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.373 15:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:29.373 15:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:29.373 15:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:29.631 Malloc1 00:12:29.631 15:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:29.890 Malloc2 00:12:29.890 15:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:30.149 15:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:30.407 15:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:30.666 [2024-11-20 15:20:16.895045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:30.666 15:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:30.666 15:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8dad6680-8f87-41ce-b4dd-1eaa013ce9e2 -a 10.0.0.3 -s 4420 -i 4 00:12:30.666 15:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.666 15:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:30.666 15:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.666 15:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:30.666 15:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:33.200 [ 0]:0x1 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
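The ns_is_visible helper being traced here (the target/ns_masking.sh@43-45 lines) can be read directly off the log: list the namespaces the controller exposes, then confirm the NGUID is not the all-zero placeholder a masked namespace reports. Below is a sketch reconstructed from those trace lines; /dev/nvme0 is hard-coded for brevity, whereas the script actually resolves the controller name from `nvme list-subsys -o json` as the @26 trace lines show.

    # Reconstructed sketch of the visibility check traced above.
    ns_is_visible() {
        local nsid=$1    # e.g. 0x1
        # A masked namespace drops out of the controller's namespace list...
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # ...or identifies with an all-zero NGUID.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
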
00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c7ab71b5c53f49f2bcc7b38e609e8f6b 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c7ab71b5c53f49f2bcc7b38e609e8f6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:33.200 [ 0]:0x1 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c7ab71b5c53f49f2bcc7b38e609e8f6b 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c7ab71b5c53f49f2bcc7b38e609e8f6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:33.200 [ 1]:0x2 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=902de27ca3ac497f99fc185c6d696b54 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 902de27ca3ac497f99fc185c6d696b54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.200 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.458 15:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:33.722 15:20:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:33.723 15:20:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8dad6680-8f87-41ce-b4dd-1eaa013ce9e2 -a 10.0.0.3 -s 4420 -i 4 00:12:33.980 15:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:33.980 15:20:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:33.980 15:20:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.980 15:20:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:33.980 15:20:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:33.980 15:20:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:35.884 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:36.144 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.144 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:36.144 [ 0]:0x2 00:12:36.144 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.144 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:36.144 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=902de27ca3ac497f99fc185c6d696b54 00:12:36.144 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 902de27ca3ac497f99fc185c6d696b54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.144 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:36.403 [ 0]:0x1 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c7ab71b5c53f49f2bcc7b38e609e8f6b 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c7ab71b5c53f49f2bcc7b38e609e8f6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.403 [ 1]:0x2 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=902de27ca3ac497f99fc185c6d696b54 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 902de27ca3ac497f99fc185c6d696b54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.403 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:36.662 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:36.662 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:36.662 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:36.662 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:36.662 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.662 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:36.662 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.662 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:36.662 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:36.662 [ 0]:0x2 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=902de27ca3ac497f99fc185c6d696b54 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 902de27ca3ac497f99fc185c6d696b54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:36.662 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.921 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:36.921 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:36.921 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8dad6680-8f87-41ce-b4dd-1eaa013ce9e2 -a 10.0.0.3 -s 4420 -i 4 00:12:37.181 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:37.181 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:37.181 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.181 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:37.181 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:37.181 15:20:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:39.086 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:39.086 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:39.086 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.086 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:39.086 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.086 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:39.086 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:39.086 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.346 [ 0]:0x1 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c7ab71b5c53f49f2bcc7b38e609e8f6b 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c7ab71b5c53f49f2bcc7b38e609e8f6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.346 [ 1]:0x2 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=902de27ca3ac497f99fc185c6d696b54 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 902de27ca3ac497f99fc185c6d696b54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.346 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 
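[annotation] Every ns_is_visible check traced above follows one pattern: grep the controller's active-namespace list for the NSID, then read that namespace's NGUID and treat an all-zero value as "masked". A minimal sketch of the pattern, with illustrative names (the real helper sits around target/ns_masking.sh@43-45):

    # Sketch only: is namespace <nsid> visible on controller /dev/<ctrl>?
    # A namespace hidden by ns masking either drops out of "nvme list-ns"
    # or identifies with an all-zero NGUID, so the final test fails for it.
    ns_is_visible() {
        local ctrl=$1 nsid=$2 nguid
        nvme list-ns "/dev/$ctrl" | grep "$nsid"
        nguid=$(nvme id-ns "/dev/$ctrl" -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    ns_is_visible nvme0 0x2        # passes above: NGUID 902de27c... is non-zero
    NOT ns_is_visible nvme0 0x1    # NOT (common/autotest_common.sh) inverts the status

Visibility itself is toggled purely target-side: /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 unmasks namespace 1 for host1, and nvmf_ns_remove_host masks it again; the es=1 bookkeeping in the trace is just NOT recording the expected failure.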
00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:39.605 15:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.605 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.605 [ 0]:0x2 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=902de27ca3ac497f99fc185c6d696b54 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 902de27ca3ac497f99fc185c6d696b54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:39.606 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:39.865 [2024-11-20 15:20:26.262282] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:39.865 2024/11/20 15:20:26 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:12:39.865 request: 00:12:39.865 { 00:12:39.865 "method": "nvmf_ns_remove_host", 00:12:39.865 "params": { 00:12:39.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:39.865 "nsid": 2, 00:12:39.865 "host": "nqn.2016-06.io.spdk:host1" 00:12:39.865 } 00:12:39.865 } 00:12:39.865 Got JSON-RPC error response 00:12:39.865 GoRPCClient: error on JSON-RPC call 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.865 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:12:40.125 [ 0]:0x2 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=902de27ca3ac497f99fc185c6d696b54 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 902de27ca3ac497f99fc185c6d696b54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76618 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76618 /var/tmp/host.sock 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 76618 ']' 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:40.125 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.126 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:40.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:40.126 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.126 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:40.126 [2024-11-20 15:20:26.531969] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
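[annotation] From here the script starts a second SPDK process to play the host role, bound to a private RPC socket so host-side bdev_nvme RPCs do not collide with the target on the default socket. A hedged sketch of the launch-and-wait pattern (the &/$! plumbing is illustrative; waitforlisten is the autotest helper that polls until the socket answers):

    # Sketch: host-side spdk_tgt on its own RPC socket, core mask 0x2 (core 1,
    # matching the "Reactor started on core 1" notice below).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
    waitforlisten "$hostpid" /var/tmp/host.sock

    # hostrpc in the trace is shorthand for addressing that socket explicitly:
    "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0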
00:12:40.126 [2024-11-20 15:20:26.532047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76618 ] 00:12:40.383 [2024-11-20 15:20:26.684999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.384 [2024-11-20 15:20:26.762699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.950 15:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.950 15:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:40.950 15:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.209 15:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:41.468 15:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 17374df1-007f-4134-b954-bd496156c65d 00:12:41.468 15:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:41.468 15:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 17374DF1007F4134B954BD496156C65D -i 00:12:41.727 15:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid b3c2573b-dd70-4b39-8505-a05bdbac9379 00:12:41.727 15:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:41.727 15:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B3C2573BDD704B398505A05BDBAC9379 -i 00:12:41.987 15:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:42.246 15:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:42.505 15:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:42.505 15:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:42.763 nvme0n1 00:12:42.763 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:42.763 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:43.021 nvme1n2 00:12:43.021 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:43.021 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:43.021 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:43.021 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:43.021 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:43.279 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:43.280 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:43.280 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:43.280 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:43.538 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 17374df1-007f-4134-b954-bd496156c65d == \1\7\3\7\4\d\f\1\-\0\0\7\f\-\4\1\3\4\-\b\9\5\4\-\b\d\4\9\6\1\5\6\c\6\5\d ]] 00:12:43.538 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:43.538 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:43.538 15:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:43.805 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ b3c2573b-dd70-4b39-8505-a05bdbac9379 == \b\3\c\2\5\7\3\b\-\d\d\7\0\-\4\b\3\9\-\8\5\0\5\-\a\0\5\b\d\b\a\c\9\3\7\9 ]] 00:12:43.805 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.109 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 17374df1-007f-4134-b954-bd496156c65d 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 17374DF1007F4134B954BD496156C65D 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 17374DF1007F4134B954BD496156C65D 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:44.367 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 17374DF1007F4134B954BD496156C65D 00:12:44.367 [2024-11-20 15:20:30.828421] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:44.367 [2024-11-20 15:20:30.828481] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:44.367 [2024-11-20 15:20:30.828494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.367 2024/11/20 15:20:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid nguid:17374DF1007F4134B954BD496156C65D no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:44.367 request: 00:12:44.367 { 00:12:44.367 "method": "nvmf_subsystem_add_ns", 00:12:44.367 "params": { 00:12:44.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.367 "namespace": { 00:12:44.367 "bdev_name": "invalid", 00:12:44.367 "nsid": 1, 00:12:44.367 "nguid": "17374DF1007F4134B954BD496156C65D", 00:12:44.367 "no_auto_visible": false 00:12:44.367 } 00:12:44.367 } 00:12:44.367 } 00:12:44.367 Got JSON-RPC error response 00:12:44.367 GoRPCClient: error on JSON-RPC call 00:12:44.626 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:44.626 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:44.626 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:44.626 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:44.626 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 17374df1-007f-4134-b954-bd496156c65d 00:12:44.626 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:44.626 15:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 17374DF1007F4134B954BD496156C65D -i 00:12:44.626 15:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@143 -- # sleep 2s 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 76618 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 76618 ']' 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 76618 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76618 00:12:47.160 killing process with pid 76618 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76618' 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 76618 00:12:47.160 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 76618 00:12:47.726 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.726 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:47.726 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:47.726 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:47.726 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:47.986 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:47.986 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:47.986 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:47.986 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:47.986 rmmod nvme_tcp 00:12:47.986 rmmod nvme_fabrics 00:12:48.245 rmmod nvme_keyring 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 76240 ']' 00:12:48.245 15:20:34 
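[annotation] The teardown that runs here is the standard autotest sequence: kill the host-side process, delete the subsystem, then nvmftestfini syncs, unloads nvme-tcp/nvme-fabrics/nvme-keyring, and dismantles the test network. A rough sketch of killprocess as exercised in this trace (illustrative; the real helper also inspects the process name first, hence the reactor_0/reactor_1 checks):

    # Sketch: kill a test process by pid and reap it.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing left to kill
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                      # reap; tolerate non-zero exit
    }

    killprocess "$hostpid"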
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 76240 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 76240 ']' 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 76240 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76240 00:12:48.245 killing process with pid 76240 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76240' 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 76240 00:12:48.245 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 76240 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:48.504 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.763 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:12:48.763 00:12:48.763 real 0m21.537s 00:12:48.763 user 0m33.271s 00:12:48.763 sys 0m4.909s 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.763 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:48.763 ************************************ 00:12:48.763 END TEST nvmf_ns_masking 00:12:48.763 ************************************ 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.022 ************************************ 00:12:49.022 START TEST nvmf_auth_target 00:12:49.022 ************************************ 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:49.022 * Looking for test storage... 
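[annotation] run_test is the wrapper producing the banner pairs and the timing block above: it prints START TEST, times the suite, then emits the real/user/sys summary and END TEST. Approximately, and hedged (the real wrapper in autotest_common.sh also toggles xtrace around the banners):

    # Sketch of the run_test pattern visible in this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                # emits the "real/user/sys" block on completion
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp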
00:12:49.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:49.022 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.282 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:49.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.282 --rc genhtml_branch_coverage=1 00:12:49.283 --rc genhtml_function_coverage=1 00:12:49.283 --rc genhtml_legend=1 00:12:49.283 --rc geninfo_all_blocks=1 00:12:49.283 --rc geninfo_unexecuted_blocks=1 00:12:49.283 00:12:49.283 ' 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:49.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.283 --rc genhtml_branch_coverage=1 00:12:49.283 --rc genhtml_function_coverage=1 00:12:49.283 --rc genhtml_legend=1 00:12:49.283 --rc geninfo_all_blocks=1 00:12:49.283 --rc geninfo_unexecuted_blocks=1 00:12:49.283 00:12:49.283 ' 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:49.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.283 --rc genhtml_branch_coverage=1 00:12:49.283 --rc genhtml_function_coverage=1 00:12:49.283 --rc genhtml_legend=1 00:12:49.283 --rc geninfo_all_blocks=1 00:12:49.283 --rc geninfo_unexecuted_blocks=1 00:12:49.283 00:12:49.283 ' 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:49.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.283 --rc genhtml_branch_coverage=1 00:12:49.283 --rc genhtml_function_coverage=1 00:12:49.283 --rc genhtml_legend=1 00:12:49.283 --rc geninfo_all_blocks=1 00:12:49.283 --rc geninfo_unexecuted_blocks=1 00:12:49.283 00:12:49.283 ' 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:49.283 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:49.283 
15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:49.283 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:49.284 Cannot find device "nvmf_init_br" 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:49.284 Cannot find device "nvmf_init_br2" 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:49.284 Cannot find device "nvmf_tgt_br" 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:49.284 Cannot find device "nvmf_tgt_br2" 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:49.284 Cannot find device "nvmf_init_br" 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:49.284 Cannot find device "nvmf_init_br2" 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:49.284 Cannot find device "nvmf_tgt_br" 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:49.284 Cannot find device "nvmf_tgt_br2" 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:49.284 Cannot find device "nvmf_br" 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:49.284 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:49.547 Cannot find device "nvmf_init_if" 00:12:49.548 15:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:49.548 Cannot find device "nvmf_init_if2" 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:49.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:49.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:49.548 15:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:49.548 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:49.548 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:49.548 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:12:49.548 00:12:49.548 --- 10.0.0.3 ping statistics --- 00:12:49.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.548 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:49.548 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:49.548 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:12:49.548 00:12:49.548 --- 10.0.0.4 ping statistics --- 00:12:49.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.548 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:49.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:49.548 00:12:49.548 --- 10.0.0.1 ping statistics --- 00:12:49.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.548 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:49.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:12:49.548 00:12:49.548 --- 10.0.0.2 ping statistics --- 00:12:49.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.548 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:49.548 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=77118 00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 77118 00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 77118 ']' 00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
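A condensed sketch of the network that nvmf_veth_init assembles in the trace above: veth pairs for initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, iptables openings for NVMe/TCP port 4420, and a ping check. This is a simplified reconstruction, not the test script itself — only one initiator pair and one target pair are shown, where the run above creates two of each (10.0.0.1/.2 initiator-side, 10.0.0.3/.4 inside the namespace):

# one initiator veth pair (host side) and one target veth pair (namespaced)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# addresses: initiator on the host side, target inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# bridge the host-side peers together and bring everything up
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# accept NVMe/TCP (port 4420) on the initiator interface; let the bridge forward
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3   # initiator -> target, as verified in the log above

With this in place the nvmf target runs inside the namespace (ip netns exec nvmf_tgt_ns_spdk ...) and listens on 10.0.0.3/10.0.0.4, while the host-side initiator reaches it across the bridge — which is why the pings above run in both directions before the target is started.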
00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.808 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=77159 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:50.745 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9fa0905a38dfea8730d06ed1603774a495b72917dded97b3 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kXq 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9fa0905a38dfea8730d06ed1603774a495b72917dded97b3 0 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9fa0905a38dfea8730d06ed1603774a495b72917dded97b3 0 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9fa0905a38dfea8730d06ed1603774a495b72917dded97b3 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:50.746 15:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kXq 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kXq 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.kXq 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4396eee37264b61e820411109b292aef362e8cca272e658acf4446c5063d5d1a 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5bT 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4396eee37264b61e820411109b292aef362e8cca272e658acf4446c5063d5d1a 3 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4396eee37264b61e820411109b292aef362e8cca272e658acf4446c5063d5d1a 3 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4396eee37264b61e820411109b292aef362e8cca272e658acf4446c5063d5d1a 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:50.746 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5bT 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5bT 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.5bT 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:51.006 15:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ab06f72814e0336d98ec3d498d40ada6 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kNW 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ab06f72814e0336d98ec3d498d40ada6 1 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ab06f72814e0336d98ec3d498d40ada6 1 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ab06f72814e0336d98ec3d498d40ada6 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kNW 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kNW 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.kNW 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=937337df768abdb7415265bd15ad3091570d61c745b5ebd7 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4hF 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 937337df768abdb7415265bd15ad3091570d61c745b5ebd7 2 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 937337df768abdb7415265bd15ad3091570d61c745b5ebd7 2 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=937337df768abdb7415265bd15ad3091570d61c745b5ebd7 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4hF 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4hF 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.4hF 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fc6246d52ae41e50773cc8d42757dc5b4b40ca644254c6d2 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.E8N 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fc6246d52ae41e50773cc8d42757dc5b4b40ca644254c6d2 2 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fc6246d52ae41e50773cc8d42757dc5b4b40ca644254c6d2 2 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fc6246d52ae41e50773cc8d42757dc5b4b40ca644254c6d2 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.E8N 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.E8N 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.E8N 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:51.006 15:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:51.006 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:51.007 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:51.266 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:51.266 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f782088221ec39298cd4e39127b32e1e 00:12:51.266 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Z6a 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f782088221ec39298cd4e39127b32e1e 1 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f782088221ec39298cd4e39127b32e1e 1 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f782088221ec39298cd4e39127b32e1e 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Z6a 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Z6a 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Z6a 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0f6c79ca29f39372e06a4f1cb65f3d72764d795068d3eb786aabcd76b2c5c4a7 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GUo 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
0f6c79ca29f39372e06a4f1cb65f3d72764d795068d3eb786aabcd76b2c5c4a7 3 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0f6c79ca29f39372e06a4f1cb65f3d72764d795068d3eb786aabcd76b2c5c4a7 3 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0f6c79ca29f39372e06a4f1cb65f3d72764d795068d3eb786aabcd76b2c5c4a7 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GUo 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GUo 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.GUo 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 77118 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 77118 ']' 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.267 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.526 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.526 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:51.526 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 77159 /var/tmp/host.sock 00:12:51.526 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 77159 ']' 00:12:51.526 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:51.526 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:51.526 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:12:51.526 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.526 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.785 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.785 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:51.785 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:51.785 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.785 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.785 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.785 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:51.785 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kXq 00:12:51.785 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.785 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.785 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.785 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.kXq 00:12:51.786 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kXq 00:12:52.045 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.5bT ]] 00:12:52.045 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5bT 00:12:52.045 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.045 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.045 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.045 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5bT 00:12:52.045 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5bT 00:12:52.304 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:52.304 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kNW 00:12:52.304 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.304 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.304 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.304 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kNW 00:12:52.304 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kNW 00:12:52.563 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.4hF ]] 00:12:52.563 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4hF 00:12:52.563 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.563 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.563 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.563 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4hF 00:12:52.563 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4hF 00:12:52.821 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:52.821 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.E8N 00:12:52.821 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.821 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.821 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.821 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.E8N 00:12:52.821 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.E8N 00:12:53.080 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Z6a ]] 00:12:53.080 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z6a 00:12:53.080 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.080 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.080 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.080 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z6a 00:12:53.080 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z6a 00:12:53.339 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:53.339 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.GUo 00:12:53.339 15:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.339 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.339 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.339 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.GUo 00:12:53.339 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.GUo 00:12:53.598 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:53.598 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:53.598 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:53.598 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.598 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:53.598 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.860 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.128 00:12:54.128 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.128 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.128 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.389 { 00:12:54.389 "auth": { 00:12:54.389 "dhgroup": "null", 00:12:54.389 "digest": "sha256", 00:12:54.389 "state": "completed" 00:12:54.389 }, 00:12:54.389 "cntlid": 1, 00:12:54.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:12:54.389 "listen_address": { 00:12:54.389 "adrfam": "IPv4", 00:12:54.389 "traddr": "10.0.0.3", 00:12:54.389 "trsvcid": "4420", 00:12:54.389 "trtype": "TCP" 00:12:54.389 }, 00:12:54.389 "peer_address": { 00:12:54.389 "adrfam": "IPv4", 00:12:54.389 "traddr": "10.0.0.1", 00:12:54.389 "trsvcid": "34168", 00:12:54.389 "trtype": "TCP" 00:12:54.389 }, 00:12:54.389 "qid": 0, 00:12:54.389 "state": "enabled", 00:12:54.389 "thread": "nvmf_tgt_poll_group_000" 00:12:54.389 } 00:12:54.389 ]' 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.389 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.648 15:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:12:54.648 15:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.840 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.840 15:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.840 00:12:58.840 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.840 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.840 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.098 { 00:12:59.098 "auth": { 00:12:59.098 "dhgroup": "null", 00:12:59.098 "digest": "sha256", 00:12:59.098 "state": "completed" 00:12:59.098 }, 00:12:59.098 "cntlid": 3, 00:12:59.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:12:59.098 "listen_address": { 00:12:59.098 "adrfam": "IPv4", 00:12:59.098 "traddr": "10.0.0.3", 00:12:59.098 "trsvcid": "4420", 00:12:59.098 "trtype": "TCP" 00:12:59.098 }, 00:12:59.098 "peer_address": { 00:12:59.098 "adrfam": "IPv4", 00:12:59.098 "traddr": "10.0.0.1", 00:12:59.098 "trsvcid": "52524", 00:12:59.098 "trtype": "TCP" 00:12:59.098 }, 00:12:59.098 "qid": 0, 00:12:59.098 "state": "enabled", 00:12:59.098 "thread": "nvmf_tgt_poll_group_000" 00:12:59.098 } 00:12:59.098 ]' 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.098 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.356 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret 
DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:12:59.356 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:12:59.925 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.185 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.444 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.444 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.444 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.444 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.704 00:13:00.704 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.704 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.704 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.963 { 00:13:00.963 "auth": { 00:13:00.963 "dhgroup": "null", 00:13:00.963 "digest": "sha256", 00:13:00.963 "state": "completed" 00:13:00.963 }, 00:13:00.963 "cntlid": 5, 00:13:00.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:00.963 "listen_address": { 00:13:00.963 "adrfam": "IPv4", 00:13:00.963 "traddr": "10.0.0.3", 00:13:00.963 "trsvcid": "4420", 00:13:00.963 "trtype": "TCP" 00:13:00.963 }, 00:13:00.963 "peer_address": { 00:13:00.963 "adrfam": "IPv4", 00:13:00.963 "traddr": "10.0.0.1", 00:13:00.963 "trsvcid": "52556", 00:13:00.963 "trtype": "TCP" 00:13:00.963 }, 00:13:00.963 "qid": 0, 00:13:00.963 "state": "enabled", 00:13:00.963 "thread": "nvmf_tgt_poll_group_000" 00:13:00.963 } 00:13:00.963 ]' 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.963 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.222 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:01.222 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:01.790 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.790 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:01.790 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.790 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.790 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.790 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.790 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:01.790 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.048 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.307 00:13:02.307 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.307 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.307 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.566 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.566 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.566 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.566 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.566 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.566 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.566 { 00:13:02.566 "auth": { 00:13:02.566 "dhgroup": "null", 00:13:02.566 "digest": "sha256", 00:13:02.566 "state": "completed" 00:13:02.566 }, 00:13:02.566 "cntlid": 7, 00:13:02.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:02.566 "listen_address": { 00:13:02.566 "adrfam": "IPv4", 00:13:02.567 "traddr": "10.0.0.3", 00:13:02.567 "trsvcid": "4420", 00:13:02.567 "trtype": "TCP" 00:13:02.567 }, 00:13:02.567 "peer_address": { 00:13:02.567 "adrfam": "IPv4", 00:13:02.567 "traddr": "10.0.0.1", 00:13:02.567 "trsvcid": "52568", 00:13:02.567 "trtype": "TCP" 00:13:02.567 }, 00:13:02.567 "qid": 0, 00:13:02.567 "state": "enabled", 00:13:02.567 "thread": "nvmf_tgt_poll_group_000" 00:13:02.567 } 00:13:02.567 ]' 00:13:02.567 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.825 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:02.825 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.825 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:02.825 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.825 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.825 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.825 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.084 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:03.084 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:03.654 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.654 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:03.654 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.654 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.654 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.654 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:03.654 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.654 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:03.654 15:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.917 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.177 00:13:04.177 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.177 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.177 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.437 { 00:13:04.437 "auth": { 00:13:04.437 "dhgroup": "ffdhe2048", 00:13:04.437 "digest": "sha256", 00:13:04.437 "state": "completed" 00:13:04.437 }, 00:13:04.437 "cntlid": 9, 00:13:04.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:04.437 "listen_address": { 00:13:04.437 "adrfam": "IPv4", 00:13:04.437 "traddr": "10.0.0.3", 00:13:04.437 "trsvcid": "4420", 00:13:04.437 "trtype": "TCP" 00:13:04.437 }, 00:13:04.437 "peer_address": { 00:13:04.437 "adrfam": "IPv4", 00:13:04.437 "traddr": "10.0.0.1", 00:13:04.437 "trsvcid": "52598", 00:13:04.437 "trtype": "TCP" 00:13:04.437 }, 00:13:04.437 "qid": 0, 00:13:04.437 "state": "enabled", 00:13:04.437 "thread": "nvmf_tgt_poll_group_000" 00:13:04.437 } 00:13:04.437 ]' 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.437 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.697 
15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:04.697 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:05.265 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.265 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:05.265 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.265 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.525 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.525 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.525 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:05.525 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.784 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.043 00:13:06.043 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.043 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.043 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.302 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.302 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.302 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.302 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.302 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.302 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.302 { 00:13:06.302 "auth": { 00:13:06.302 "dhgroup": "ffdhe2048", 00:13:06.302 "digest": "sha256", 00:13:06.302 "state": "completed" 00:13:06.302 }, 00:13:06.302 "cntlid": 11, 00:13:06.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:06.302 "listen_address": { 00:13:06.302 "adrfam": "IPv4", 00:13:06.302 "traddr": "10.0.0.3", 00:13:06.302 "trsvcid": "4420", 00:13:06.302 "trtype": "TCP" 00:13:06.302 }, 00:13:06.302 "peer_address": { 00:13:06.302 "adrfam": "IPv4", 00:13:06.302 "traddr": "10.0.0.1", 00:13:06.302 "trsvcid": "52622", 00:13:06.302 "trtype": "TCP" 00:13:06.302 }, 00:13:06.302 "qid": 0, 00:13:06.302 "state": "enabled", 00:13:06.302 "thread": "nvmf_tgt_poll_group_000" 00:13:06.302 } 00:13:06.302 ]' 00:13:06.302 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.302 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:06.302 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.302 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:06.302 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.563 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.563 15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.563 
15:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.822 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:06.822 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:07.391 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.391 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:07.391 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.391 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.391 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.391 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.391 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:07.391 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.651 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.219 00:13:08.219 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.219 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.219 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.219 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.219 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.219 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.219 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.478 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.478 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.478 { 00:13:08.478 "auth": { 00:13:08.478 "dhgroup": "ffdhe2048", 00:13:08.478 "digest": "sha256", 00:13:08.478 "state": "completed" 00:13:08.478 }, 00:13:08.478 "cntlid": 13, 00:13:08.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:08.478 "listen_address": { 00:13:08.478 "adrfam": "IPv4", 00:13:08.478 "traddr": "10.0.0.3", 00:13:08.478 "trsvcid": "4420", 00:13:08.478 "trtype": "TCP" 00:13:08.478 }, 00:13:08.478 "peer_address": { 00:13:08.478 "adrfam": "IPv4", 00:13:08.478 "traddr": "10.0.0.1", 00:13:08.478 "trsvcid": "51924", 00:13:08.478 "trtype": "TCP" 00:13:08.478 }, 00:13:08.478 "qid": 0, 00:13:08.478 "state": "enabled", 00:13:08.478 "thread": "nvmf_tgt_poll_group_000" 00:13:08.478 } 00:13:08.478 ]' 00:13:08.478 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.478 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:08.478 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.478 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:08.478 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.478 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.478 15:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.478 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.737 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:08.737 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:09.307 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.307 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:09.307 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.307 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.307 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.307 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.307 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:09.307 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
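[For orientation: every connect_authenticate pass traced in this log repeats the same target/host cycle, varying only the digest (sha256 throughout this section), the DH group (null, ffdhe2048, ffdhe3072), and the key index (key0..key3; key3 carries no controller key, so --dhchap-ctrlr-key is omitted for it). The sketch below is a distillation of one iteration from the commands traced above, not additional test output: rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the DHHC-1 secrets are abbreviated placeholders rather than the literal values logged above.]

    # One connect_authenticate iteration (sha256 / ffdhe2048 / key2), as traced above.
    # Host-side RPC server (-s /var/tmp/host.sock): restrict what the initiator may negotiate.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # Target-side RPC (default socket): allow the host NQN with this key pair.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Attach from the SPDK host side, authenticating with the same keys.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Verify the controller came up and the qpair finished authentication.
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'        # expect: nvme0
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'  # digest/dhgroup/state "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # Repeat the connect through the kernel initiator with the raw secrets (elided here).
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 \
        --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 \
        --dhchap-secret DHHC-1:02:... --dhchap-ctrl-secret DHHC-1:01:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Tear down the host entry before the next (dhgroup, key) combination.
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105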
00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:09.567 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:09.826 00:13:10.085 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.085 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.085 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.344 { 00:13:10.344 "auth": { 00:13:10.344 "dhgroup": "ffdhe2048", 00:13:10.344 "digest": "sha256", 00:13:10.344 "state": "completed" 00:13:10.344 }, 00:13:10.344 "cntlid": 15, 00:13:10.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:10.344 "listen_address": { 00:13:10.344 "adrfam": "IPv4", 00:13:10.344 "traddr": "10.0.0.3", 00:13:10.344 "trsvcid": "4420", 00:13:10.344 "trtype": "TCP" 00:13:10.344 }, 00:13:10.344 "peer_address": { 00:13:10.344 "adrfam": "IPv4", 00:13:10.344 "traddr": "10.0.0.1", 00:13:10.344 "trsvcid": "51938", 00:13:10.344 "trtype": "TCP" 00:13:10.344 }, 00:13:10.344 "qid": 0, 00:13:10.344 "state": "enabled", 00:13:10.344 "thread": "nvmf_tgt_poll_group_000" 00:13:10.344 } 00:13:10.344 ]' 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.344 
15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.344 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.604 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:10.604 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:11.172 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.172 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:11.172 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.172 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.172 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.172 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:11.172 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.172 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:11.172 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.430 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.688 00:13:11.947 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.947 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.947 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.206 { 00:13:12.206 "auth": { 00:13:12.206 "dhgroup": "ffdhe3072", 00:13:12.206 "digest": "sha256", 00:13:12.206 "state": "completed" 00:13:12.206 }, 00:13:12.206 "cntlid": 17, 00:13:12.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:12.206 "listen_address": { 00:13:12.206 "adrfam": "IPv4", 00:13:12.206 "traddr": "10.0.0.3", 00:13:12.206 "trsvcid": "4420", 00:13:12.206 "trtype": "TCP" 00:13:12.206 }, 00:13:12.206 "peer_address": { 00:13:12.206 "adrfam": "IPv4", 00:13:12.206 "traddr": "10.0.0.1", 00:13:12.206 "trsvcid": "51968", 00:13:12.206 "trtype": "TCP" 00:13:12.206 }, 00:13:12.206 "qid": 0, 00:13:12.206 "state": "enabled", 00:13:12.206 "thread": "nvmf_tgt_poll_group_000" 00:13:12.206 } 00:13:12.206 ]' 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.206 15:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.206 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.465 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:12.465 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:13.032 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.032 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:13.032 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.032 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.032 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.032 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.032 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:13.032 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.291 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.858 00:13:13.858 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.858 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.858 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.118 { 00:13:14.118 "auth": { 00:13:14.118 "dhgroup": "ffdhe3072", 00:13:14.118 "digest": "sha256", 00:13:14.118 "state": "completed" 00:13:14.118 }, 00:13:14.118 "cntlid": 19, 00:13:14.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:14.118 "listen_address": { 00:13:14.118 "adrfam": "IPv4", 00:13:14.118 "traddr": "10.0.0.3", 00:13:14.118 "trsvcid": "4420", 00:13:14.118 "trtype": "TCP" 00:13:14.118 }, 00:13:14.118 "peer_address": { 00:13:14.118 "adrfam": "IPv4", 00:13:14.118 "traddr": "10.0.0.1", 00:13:14.118 "trsvcid": "51996", 00:13:14.118 "trtype": "TCP" 00:13:14.118 }, 00:13:14.118 "qid": 0, 00:13:14.118 "state": "enabled", 00:13:14.118 "thread": "nvmf_tgt_poll_group_000" 00:13:14.118 } 00:13:14.118 ]' 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.118 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.377 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:14.377 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:14.943 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.943 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:14.943 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.943 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.943 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.943 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.943 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:14.943 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.201 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.766 00:13:15.766 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.766 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.766 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.766 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.766 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.766 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.766 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.023 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.023 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.023 { 00:13:16.023 "auth": { 00:13:16.023 "dhgroup": "ffdhe3072", 00:13:16.023 "digest": "sha256", 00:13:16.023 "state": "completed" 00:13:16.023 }, 00:13:16.023 "cntlid": 21, 00:13:16.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:16.023 "listen_address": { 00:13:16.023 "adrfam": "IPv4", 00:13:16.023 "traddr": "10.0.0.3", 00:13:16.023 "trsvcid": "4420", 00:13:16.023 "trtype": "TCP" 00:13:16.023 }, 00:13:16.023 "peer_address": { 00:13:16.023 "adrfam": "IPv4", 00:13:16.023 "traddr": "10.0.0.1", 00:13:16.023 "trsvcid": "52016", 00:13:16.023 "trtype": "TCP" 00:13:16.023 }, 00:13:16.023 "qid": 0, 00:13:16.023 "state": "enabled", 00:13:16.023 "thread": "nvmf_tgt_poll_group_000" 00:13:16.023 } 00:13:16.023 ]' 00:13:16.023 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.023 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:16.023 15:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.023 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:16.023 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.023 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.023 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.023 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.280 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:16.281 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:16.848 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.848 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:16.848 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.848 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.848 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.848 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.848 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:16.848 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:17.106 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:17.106 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.106 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:17.106 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:17.106 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:17.106 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.106 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:13:17.106 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.106 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.107 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.107 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:17.107 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.107 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.365 00:13:17.365 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.365 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.365 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.653 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.653 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.653 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.653 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.653 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.653 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.653 { 00:13:17.654 "auth": { 00:13:17.654 "dhgroup": "ffdhe3072", 00:13:17.654 "digest": "sha256", 00:13:17.654 "state": "completed" 00:13:17.654 }, 00:13:17.654 "cntlid": 23, 00:13:17.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:17.654 "listen_address": { 00:13:17.654 "adrfam": "IPv4", 00:13:17.654 "traddr": "10.0.0.3", 00:13:17.654 "trsvcid": "4420", 00:13:17.654 "trtype": "TCP" 00:13:17.654 }, 00:13:17.654 "peer_address": { 00:13:17.654 "adrfam": "IPv4", 00:13:17.654 "traddr": "10.0.0.1", 00:13:17.654 "trsvcid": "50334", 00:13:17.654 "trtype": "TCP" 00:13:17.654 }, 00:13:17.654 "qid": 0, 00:13:17.654 "state": "enabled", 00:13:17.654 "thread": "nvmf_tgt_poll_group_000" 00:13:17.654 } 00:13:17.654 ]' 00:13:17.654 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.920 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:13:17.920 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.920 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:17.920 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.920 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.920 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.920 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.177 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:18.177 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:18.744 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.744 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:18.744 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.744 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.744 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.744 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:18.744 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.744 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:18.744 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.003 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.262 00:13:19.521 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.521 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.521 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.780 { 00:13:19.780 "auth": { 00:13:19.780 "dhgroup": "ffdhe4096", 00:13:19.780 "digest": "sha256", 00:13:19.780 "state": "completed" 00:13:19.780 }, 00:13:19.780 "cntlid": 25, 00:13:19.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:19.780 "listen_address": { 00:13:19.780 "adrfam": "IPv4", 00:13:19.780 "traddr": "10.0.0.3", 00:13:19.780 "trsvcid": "4420", 00:13:19.780 "trtype": "TCP" 00:13:19.780 }, 00:13:19.780 "peer_address": { 00:13:19.780 "adrfam": "IPv4", 00:13:19.780 "traddr": "10.0.0.1", 00:13:19.780 "trsvcid": "50366", 00:13:19.780 "trtype": "TCP" 00:13:19.780 }, 00:13:19.780 "qid": 0, 00:13:19.780 "state": "enabled", 00:13:19.780 "thread": "nvmf_tgt_poll_group_000" 00:13:19.780 } 00:13:19.780 ]' 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.780 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.038 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:20.039 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:20.605 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.605 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:20.605 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.605 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.605 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.605 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.605 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:20.605 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.864 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.431 00:13:21.431 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.432 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.432 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.690 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.690 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.690 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.690 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.690 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.690 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.690 { 00:13:21.690 "auth": { 00:13:21.690 "dhgroup": "ffdhe4096", 00:13:21.690 "digest": "sha256", 00:13:21.690 "state": "completed" 00:13:21.690 }, 00:13:21.690 "cntlid": 27, 00:13:21.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:21.690 "listen_address": { 00:13:21.690 "adrfam": "IPv4", 00:13:21.690 "traddr": "10.0.0.3", 00:13:21.690 "trsvcid": "4420", 00:13:21.690 "trtype": "TCP" 00:13:21.690 }, 00:13:21.690 "peer_address": { 00:13:21.690 "adrfam": "IPv4", 00:13:21.690 "traddr": "10.0.0.1", 00:13:21.690 "trsvcid": "50398", 00:13:21.690 "trtype": "TCP" 00:13:21.690 }, 00:13:21.690 "qid": 0, 
00:13:21.690 "state": "enabled", 00:13:21.690 "thread": "nvmf_tgt_poll_group_000" 00:13:21.690 } 00:13:21.690 ]' 00:13:21.690 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.690 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:21.690 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.690 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:21.690 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.690 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.690 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.690 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.949 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:21.949 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:22.536 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.536 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:22.536 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.536 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.536 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.536 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.536 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:22.536 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.806 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.374 00:13:23.374 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.374 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.374 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.632 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.632 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.633 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.633 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.633 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.633 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.633 { 00:13:23.633 "auth": { 00:13:23.633 "dhgroup": "ffdhe4096", 00:13:23.633 "digest": "sha256", 00:13:23.633 "state": "completed" 00:13:23.633 }, 00:13:23.633 "cntlid": 29, 00:13:23.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:23.633 "listen_address": { 00:13:23.633 "adrfam": "IPv4", 00:13:23.633 "traddr": "10.0.0.3", 00:13:23.633 "trsvcid": "4420", 00:13:23.633 "trtype": "TCP" 00:13:23.633 }, 00:13:23.633 "peer_address": { 00:13:23.633 "adrfam": "IPv4", 00:13:23.633 "traddr": "10.0.0.1", 
00:13:23.633 "trsvcid": "50434", 00:13:23.633 "trtype": "TCP" 00:13:23.633 }, 00:13:23.633 "qid": 0, 00:13:23.633 "state": "enabled", 00:13:23.633 "thread": "nvmf_tgt_poll_group_000" 00:13:23.633 } 00:13:23.633 ]' 00:13:23.633 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.633 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.633 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.633 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:23.633 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.633 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.633 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.633 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.892 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:23.892 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:24.458 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.458 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:24.458 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.458 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.458 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.458 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.458 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:24.458 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.717 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:25.283 00:13:25.283 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.283 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.283 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.541 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.541 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.541 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.541 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.541 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.541 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.541 { 00:13:25.541 "auth": { 00:13:25.541 "dhgroup": "ffdhe4096", 00:13:25.541 "digest": "sha256", 00:13:25.541 "state": "completed" 00:13:25.541 }, 00:13:25.541 "cntlid": 31, 00:13:25.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:25.542 "listen_address": { 00:13:25.542 "adrfam": "IPv4", 00:13:25.542 "traddr": "10.0.0.3", 00:13:25.542 "trsvcid": "4420", 00:13:25.542 "trtype": "TCP" 00:13:25.542 }, 00:13:25.542 "peer_address": { 00:13:25.542 "adrfam": "IPv4", 00:13:25.542 "traddr": 
"10.0.0.1", 00:13:25.542 "trsvcid": "50468", 00:13:25.542 "trtype": "TCP" 00:13:25.542 }, 00:13:25.542 "qid": 0, 00:13:25.542 "state": "enabled", 00:13:25.542 "thread": "nvmf_tgt_poll_group_000" 00:13:25.542 } 00:13:25.542 ]' 00:13:25.542 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.542 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:25.542 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.542 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:25.542 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.542 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.542 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.542 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.800 15:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:25.800 15:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:26.367 15:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.367 15:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:26.367 15:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.367 15:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.367 15:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.367 15:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:26.367 15:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.367 15:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:26.367 15:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:26.626 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:26.627 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.627 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:26.627 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:26.627 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:26.627 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.627 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.627 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.627 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.627 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.627 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.627 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.627 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.195 00:13:27.195 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.195 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.195 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.454 { 00:13:27.454 "auth": { 00:13:27.454 "dhgroup": "ffdhe6144", 00:13:27.454 "digest": "sha256", 00:13:27.454 "state": "completed" 00:13:27.454 }, 00:13:27.454 "cntlid": 33, 00:13:27.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:27.454 "listen_address": { 00:13:27.454 "adrfam": "IPv4", 00:13:27.454 "traddr": "10.0.0.3", 00:13:27.454 "trsvcid": "4420", 00:13:27.454 
"trtype": "TCP" 00:13:27.454 }, 00:13:27.454 "peer_address": { 00:13:27.454 "adrfam": "IPv4", 00:13:27.454 "traddr": "10.0.0.1", 00:13:27.454 "trsvcid": "50484", 00:13:27.454 "trtype": "TCP" 00:13:27.454 }, 00:13:27.454 "qid": 0, 00:13:27.454 "state": "enabled", 00:13:27.454 "thread": "nvmf_tgt_poll_group_000" 00:13:27.454 } 00:13:27.454 ]' 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.454 15:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.022 15:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:28.022 15:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:28.657 15:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.657 15:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:28.657 15:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.657 15:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.657 15:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.657 15:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.657 15:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:28.657 15:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:13:28.657 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:28.657 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.657 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:28.657 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:28.657 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:28.657 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.658 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.658 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.658 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.658 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.658 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.658 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.658 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.225 00:13:29.225 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.225 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.225 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.484 { 00:13:29.484 "auth": { 00:13:29.484 "dhgroup": "ffdhe6144", 00:13:29.484 "digest": "sha256", 00:13:29.484 "state": "completed" 00:13:29.484 }, 00:13:29.484 "cntlid": 35, 00:13:29.484 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:29.484 "listen_address": { 00:13:29.484 "adrfam": "IPv4", 00:13:29.484 "traddr": "10.0.0.3", 00:13:29.484 "trsvcid": "4420", 00:13:29.484 "trtype": "TCP" 00:13:29.484 }, 00:13:29.484 "peer_address": { 00:13:29.484 "adrfam": "IPv4", 00:13:29.484 "traddr": "10.0.0.1", 00:13:29.484 "trsvcid": "50504", 00:13:29.484 "trtype": "TCP" 00:13:29.484 }, 00:13:29.484 "qid": 0, 00:13:29.484 "state": "enabled", 00:13:29.484 "thread": "nvmf_tgt_poll_group_000" 00:13:29.484 } 00:13:29.484 ]' 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.484 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.052 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:30.052 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:30.621 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.621 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:30.621 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.621 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.621 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.621 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.621 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:30.621 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.881 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.140 00:13:31.140 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.140 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.140 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.399 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.399 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.399 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.399 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.399 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.399 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.399 { 00:13:31.399 "auth": { 00:13:31.399 "dhgroup": "ffdhe6144", 
00:13:31.399 "digest": "sha256", 00:13:31.399 "state": "completed" 00:13:31.399 }, 00:13:31.399 "cntlid": 37, 00:13:31.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:31.399 "listen_address": { 00:13:31.399 "adrfam": "IPv4", 00:13:31.399 "traddr": "10.0.0.3", 00:13:31.399 "trsvcid": "4420", 00:13:31.399 "trtype": "TCP" 00:13:31.399 }, 00:13:31.399 "peer_address": { 00:13:31.399 "adrfam": "IPv4", 00:13:31.399 "traddr": "10.0.0.1", 00:13:31.399 "trsvcid": "50538", 00:13:31.399 "trtype": "TCP" 00:13:31.399 }, 00:13:31.399 "qid": 0, 00:13:31.399 "state": "enabled", 00:13:31.399 "thread": "nvmf_tgt_poll_group_000" 00:13:31.399 } 00:13:31.399 ]' 00:13:31.659 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.659 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:31.659 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.659 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:31.659 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.659 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.659 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.659 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.918 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:31.919 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:32.492 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.493 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:32.493 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.493 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.493 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.493 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.493 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:13:32.493 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:32.752 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:33.320 00:13:33.320 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.320 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.320 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.320 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.320 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.320 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.320 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.320 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.320 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.320 { 00:13:33.320 "auth": { 00:13:33.320 "dhgroup": 
"ffdhe6144", 00:13:33.320 "digest": "sha256", 00:13:33.320 "state": "completed" 00:13:33.320 }, 00:13:33.320 "cntlid": 39, 00:13:33.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:33.320 "listen_address": { 00:13:33.320 "adrfam": "IPv4", 00:13:33.320 "traddr": "10.0.0.3", 00:13:33.320 "trsvcid": "4420", 00:13:33.320 "trtype": "TCP" 00:13:33.320 }, 00:13:33.320 "peer_address": { 00:13:33.320 "adrfam": "IPv4", 00:13:33.320 "traddr": "10.0.0.1", 00:13:33.320 "trsvcid": "50552", 00:13:33.320 "trtype": "TCP" 00:13:33.320 }, 00:13:33.320 "qid": 0, 00:13:33.320 "state": "enabled", 00:13:33.320 "thread": "nvmf_tgt_poll_group_000" 00:13:33.320 } 00:13:33.320 ]' 00:13:33.579 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.579 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:33.579 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.579 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:33.579 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.579 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.579 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.579 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.838 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:33.838 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:34.406 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.406 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:34.406 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.406 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.406 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.406 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:34.406 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.406 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:34.406 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.666 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.234 00:13:35.234 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.234 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.234 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.493 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.493 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.493 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.493 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.752 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.752 15:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.752 { 00:13:35.752 "auth": { 00:13:35.752 "dhgroup": "ffdhe8192", 00:13:35.752 "digest": "sha256", 00:13:35.752 "state": "completed" 00:13:35.752 }, 00:13:35.752 "cntlid": 41, 00:13:35.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:35.752 "listen_address": { 00:13:35.752 "adrfam": "IPv4", 00:13:35.752 "traddr": "10.0.0.3", 00:13:35.752 "trsvcid": "4420", 00:13:35.752 "trtype": "TCP" 00:13:35.752 }, 00:13:35.752 "peer_address": { 00:13:35.752 "adrfam": "IPv4", 00:13:35.752 "traddr": "10.0.0.1", 00:13:35.752 "trsvcid": "50570", 00:13:35.752 "trtype": "TCP" 00:13:35.752 }, 00:13:35.752 "qid": 0, 00:13:35.752 "state": "enabled", 00:13:35.752 "thread": "nvmf_tgt_poll_group_000" 00:13:35.752 } 00:13:35.752 ]' 00:13:35.752 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.752 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:35.752 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.752 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:35.752 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.752 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.752 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.752 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.011 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:36.011 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:36.632 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.632 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:36.632 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.632 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.632 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.632 15:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.632 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:36.632 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.891 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.459 00:13:37.459 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.459 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.459 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.719 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.719 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.719 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.719 15:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.719 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.719 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.719 { 00:13:37.719 "auth": { 00:13:37.719 "dhgroup": "ffdhe8192", 00:13:37.719 "digest": "sha256", 00:13:37.719 "state": "completed" 00:13:37.719 }, 00:13:37.719 "cntlid": 43, 00:13:37.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:37.719 "listen_address": { 00:13:37.719 "adrfam": "IPv4", 00:13:37.719 "traddr": "10.0.0.3", 00:13:37.719 "trsvcid": "4420", 00:13:37.719 "trtype": "TCP" 00:13:37.719 }, 00:13:37.719 "peer_address": { 00:13:37.719 "adrfam": "IPv4", 00:13:37.719 "traddr": "10.0.0.1", 00:13:37.719 "trsvcid": "50360", 00:13:37.719 "trtype": "TCP" 00:13:37.719 }, 00:13:37.719 "qid": 0, 00:13:37.719 "state": "enabled", 00:13:37.719 "thread": "nvmf_tgt_poll_group_000" 00:13:37.719 } 00:13:37.719 ]' 00:13:37.719 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.719 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:37.719 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.719 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:37.719 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.719 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.978 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.978 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.978 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:37.979 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:38.547 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.806 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:38.806 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.806 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:38.806 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.806 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.806 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:38.806 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.066 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.634 00:13:39.634 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.634 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.634 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.893 15:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.893 { 00:13:39.893 "auth": { 00:13:39.893 "dhgroup": "ffdhe8192", 00:13:39.893 "digest": "sha256", 00:13:39.893 "state": "completed" 00:13:39.893 }, 00:13:39.893 "cntlid": 45, 00:13:39.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:39.893 "listen_address": { 00:13:39.893 "adrfam": "IPv4", 00:13:39.893 "traddr": "10.0.0.3", 00:13:39.893 "trsvcid": "4420", 00:13:39.893 "trtype": "TCP" 00:13:39.893 }, 00:13:39.893 "peer_address": { 00:13:39.893 "adrfam": "IPv4", 00:13:39.893 "traddr": "10.0.0.1", 00:13:39.893 "trsvcid": "50380", 00:13:39.893 "trtype": "TCP" 00:13:39.893 }, 00:13:39.893 "qid": 0, 00:13:39.893 "state": "enabled", 00:13:39.893 "thread": "nvmf_tgt_poll_group_000" 00:13:39.893 } 00:13:39.893 ]' 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.893 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.152 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:40.152 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:40.718 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.718 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:40.718 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:40.718 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.718 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.718 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.718 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:40.718 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:40.977 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:41.546 00:13:41.546 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.546 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.546 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.804 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.804 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.804 
15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.804 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.063 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.063 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.063 { 00:13:42.063 "auth": { 00:13:42.063 "dhgroup": "ffdhe8192", 00:13:42.063 "digest": "sha256", 00:13:42.063 "state": "completed" 00:13:42.063 }, 00:13:42.063 "cntlid": 47, 00:13:42.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:42.063 "listen_address": { 00:13:42.063 "adrfam": "IPv4", 00:13:42.063 "traddr": "10.0.0.3", 00:13:42.063 "trsvcid": "4420", 00:13:42.063 "trtype": "TCP" 00:13:42.063 }, 00:13:42.063 "peer_address": { 00:13:42.063 "adrfam": "IPv4", 00:13:42.063 "traddr": "10.0.0.1", 00:13:42.063 "trsvcid": "50408", 00:13:42.063 "trtype": "TCP" 00:13:42.063 }, 00:13:42.063 "qid": 0, 00:13:42.063 "state": "enabled", 00:13:42.063 "thread": "nvmf_tgt_poll_group_000" 00:13:42.063 } 00:13:42.063 ]' 00:13:42.063 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.063 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:42.063 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.063 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:42.063 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.063 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.063 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.063 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.322 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:42.323 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:42.890 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.890 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:42.890 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.890 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:42.890 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.890 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:42.890 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:42.890 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.890 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:42.890 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:43.149 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:43.149 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.150 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:43.150 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:43.150 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:43.150 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.150 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.150 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.150 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.150 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.150 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.150 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.150 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.408 00:13:43.408 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.408 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.408 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.668 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.668 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.668 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.668 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.668 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.668 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.668 { 00:13:43.668 "auth": { 00:13:43.668 "dhgroup": "null", 00:13:43.668 "digest": "sha384", 00:13:43.668 "state": "completed" 00:13:43.668 }, 00:13:43.668 "cntlid": 49, 00:13:43.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:43.668 "listen_address": { 00:13:43.668 "adrfam": "IPv4", 00:13:43.668 "traddr": "10.0.0.3", 00:13:43.668 "trsvcid": "4420", 00:13:43.668 "trtype": "TCP" 00:13:43.668 }, 00:13:43.668 "peer_address": { 00:13:43.668 "adrfam": "IPv4", 00:13:43.668 "traddr": "10.0.0.1", 00:13:43.668 "trsvcid": "50436", 00:13:43.668 "trtype": "TCP" 00:13:43.668 }, 00:13:43.668 "qid": 0, 00:13:43.668 "state": "enabled", 00:13:43.668 "thread": "nvmf_tgt_poll_group_000" 00:13:43.668 } 00:13:43.668 ]' 00:13:43.668 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.927 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:43.927 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.927 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:43.927 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.927 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.927 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.927 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.187 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:44.187 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:44.768 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.768 15:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:44.768 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.768 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.768 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.768 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.768 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:44.768 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:45.039 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:45.039 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.039 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:45.039 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:45.039 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:45.039 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.040 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.040 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.040 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.040 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.040 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.040 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.040 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.298 00:13:45.298 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.298 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:13:45.298 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.866 { 00:13:45.866 "auth": { 00:13:45.866 "dhgroup": "null", 00:13:45.866 "digest": "sha384", 00:13:45.866 "state": "completed" 00:13:45.866 }, 00:13:45.866 "cntlid": 51, 00:13:45.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:45.866 "listen_address": { 00:13:45.866 "adrfam": "IPv4", 00:13:45.866 "traddr": "10.0.0.3", 00:13:45.866 "trsvcid": "4420", 00:13:45.866 "trtype": "TCP" 00:13:45.866 }, 00:13:45.866 "peer_address": { 00:13:45.866 "adrfam": "IPv4", 00:13:45.866 "traddr": "10.0.0.1", 00:13:45.866 "trsvcid": "50456", 00:13:45.866 "trtype": "TCP" 00:13:45.866 }, 00:13:45.866 "qid": 0, 00:13:45.866 "state": "enabled", 00:13:45.866 "thread": "nvmf_tgt_poll_group_000" 00:13:45.866 } 00:13:45.866 ]' 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.866 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.124 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:46.124 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:46.691 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.692 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.692 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:46.692 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.692 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.692 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.692 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.692 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:46.692 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.950 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.209 00:13:47.209 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.209 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.468 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.468 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.468 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.468 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.468 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.726 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.726 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.726 { 00:13:47.727 "auth": { 00:13:47.727 "dhgroup": "null", 00:13:47.727 "digest": "sha384", 00:13:47.727 "state": "completed" 00:13:47.727 }, 00:13:47.727 "cntlid": 53, 00:13:47.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:47.727 "listen_address": { 00:13:47.727 "adrfam": "IPv4", 00:13:47.727 "traddr": "10.0.0.3", 00:13:47.727 "trsvcid": "4420", 00:13:47.727 "trtype": "TCP" 00:13:47.727 }, 00:13:47.727 "peer_address": { 00:13:47.727 "adrfam": "IPv4", 00:13:47.727 "traddr": "10.0.0.1", 00:13:47.727 "trsvcid": "54278", 00:13:47.727 "trtype": "TCP" 00:13:47.727 }, 00:13:47.727 "qid": 0, 00:13:47.727 "state": "enabled", 00:13:47.727 "thread": "nvmf_tgt_poll_group_000" 00:13:47.727 } 00:13:47.727 ]' 00:13:47.727 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.727 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:47.727 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.727 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:47.727 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.727 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.727 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.727 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.986 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:47.986 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:48.553 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.553 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:48.553 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.553 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.553 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.553 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.553 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:48.553 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:48.812 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:48.813 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.813 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:48.813 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:48.813 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:48.813 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.813 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:13:48.813 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.813 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.813 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.813 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:48.813 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:48.813 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:49.403 00:13:49.403 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.403 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:13:49.403 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.403 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.403 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.403 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.403 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.662 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.662 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.662 { 00:13:49.662 "auth": { 00:13:49.662 "dhgroup": "null", 00:13:49.662 "digest": "sha384", 00:13:49.662 "state": "completed" 00:13:49.662 }, 00:13:49.662 "cntlid": 55, 00:13:49.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:49.662 "listen_address": { 00:13:49.662 "adrfam": "IPv4", 00:13:49.662 "traddr": "10.0.0.3", 00:13:49.662 "trsvcid": "4420", 00:13:49.662 "trtype": "TCP" 00:13:49.662 }, 00:13:49.662 "peer_address": { 00:13:49.662 "adrfam": "IPv4", 00:13:49.662 "traddr": "10.0.0.1", 00:13:49.662 "trsvcid": "54306", 00:13:49.662 "trtype": "TCP" 00:13:49.662 }, 00:13:49.662 "qid": 0, 00:13:49.662 "state": "enabled", 00:13:49.662 "thread": "nvmf_tgt_poll_group_000" 00:13:49.662 } 00:13:49.662 ]' 00:13:49.662 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.662 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:49.662 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.662 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:49.662 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.662 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.662 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.662 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.921 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:49.921 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:50.490 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
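[editor's sketch] Besides the SPDK bdev path, each pass also round-trips the key through the kernel initiator, which is the nvme connect/disconnect pair just logged. A hedged reconstruction of that leg, with the values taken verbatim from the trace above; per the NVMe in-band authentication spec (TP 8006), the DHHC-1:NN: prefix should identify the secret transformation (00 = unhashed, 01/02/03 = SHA-256/384/512-transformed), which matches how key0..key3 are prefixed in this log, though that mapping is an inference, not something the log states.

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105
hostid=911e0f85-f038-4b18-8714-1526ee6c3105
key='DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=:'

# Kernel-initiator leg: one I/O queue (-i 1), no ctrl-loss timeout (-l 0)
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 --dhchap-secret "$key"
# For bidirectional auth the log adds: --dhchap-ctrl-secret 'DHHC-1:NN:...:'
nvme disconnect -n "$subnqn"

# Recent nvme-cli builds can mint keys in this format, e.g.
#   nvme gen-dhchap-key --hmac 2 --key-length 48 --nqn "$hostnqn"
# (not exercised in this log; availability depends on the nvme-cli version)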
00:13:50.490 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:50.490 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.490 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.490 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.490 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:50.490 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.490 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:50.490 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.749 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.008 00:13:51.268 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.268 
15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.268 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.528 { 00:13:51.528 "auth": { 00:13:51.528 "dhgroup": "ffdhe2048", 00:13:51.528 "digest": "sha384", 00:13:51.528 "state": "completed" 00:13:51.528 }, 00:13:51.528 "cntlid": 57, 00:13:51.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:51.528 "listen_address": { 00:13:51.528 "adrfam": "IPv4", 00:13:51.528 "traddr": "10.0.0.3", 00:13:51.528 "trsvcid": "4420", 00:13:51.528 "trtype": "TCP" 00:13:51.528 }, 00:13:51.528 "peer_address": { 00:13:51.528 "adrfam": "IPv4", 00:13:51.528 "traddr": "10.0.0.1", 00:13:51.528 "trsvcid": "54336", 00:13:51.528 "trtype": "TCP" 00:13:51.528 }, 00:13:51.528 "qid": 0, 00:13:51.528 "state": "enabled", 00:13:51.528 "thread": "nvmf_tgt_poll_group_000" 00:13:51.528 } 00:13:51.528 ]' 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.528 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.788 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:51.788 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: 
--dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:52.355 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.355 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:52.355 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.355 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.355 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.355 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.356 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:52.356 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:52.614 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:52.614 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.614 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:52.614 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:52.614 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:52.614 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.614 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.614 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.615 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.615 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.615 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.615 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.615 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:53.182 00:13:53.182 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.182 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.182 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.442 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.442 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.442 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.442 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.442 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.442 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.442 { 00:13:53.442 "auth": { 00:13:53.442 "dhgroup": "ffdhe2048", 00:13:53.442 "digest": "sha384", 00:13:53.442 "state": "completed" 00:13:53.442 }, 00:13:53.442 "cntlid": 59, 00:13:53.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:53.442 "listen_address": { 00:13:53.442 "adrfam": "IPv4", 00:13:53.443 "traddr": "10.0.0.3", 00:13:53.443 "trsvcid": "4420", 00:13:53.443 "trtype": "TCP" 00:13:53.443 }, 00:13:53.443 "peer_address": { 00:13:53.443 "adrfam": "IPv4", 00:13:53.443 "traddr": "10.0.0.1", 00:13:53.443 "trsvcid": "54354", 00:13:53.443 "trtype": "TCP" 00:13:53.443 }, 00:13:53.443 "qid": 0, 00:13:53.443 "state": "enabled", 00:13:53.443 "thread": "nvmf_tgt_poll_group_000" 00:13:53.443 } 00:13:53.443 ]' 00:13:53.443 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.443 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:53.443 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.443 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:53.443 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.443 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.443 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.443 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.702 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:53.702 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:13:54.269 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.269 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:54.269 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.269 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.269 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.269 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.269 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:54.269 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.527 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:55.095 00:13:55.095 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.095 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.095 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.355 { 00:13:55.355 "auth": { 00:13:55.355 "dhgroup": "ffdhe2048", 00:13:55.355 "digest": "sha384", 00:13:55.355 "state": "completed" 00:13:55.355 }, 00:13:55.355 "cntlid": 61, 00:13:55.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:55.355 "listen_address": { 00:13:55.355 "adrfam": "IPv4", 00:13:55.355 "traddr": "10.0.0.3", 00:13:55.355 "trsvcid": "4420", 00:13:55.355 "trtype": "TCP" 00:13:55.355 }, 00:13:55.355 "peer_address": { 00:13:55.355 "adrfam": "IPv4", 00:13:55.355 "traddr": "10.0.0.1", 00:13:55.355 "trsvcid": "54392", 00:13:55.355 "trtype": "TCP" 00:13:55.355 }, 00:13:55.355 "qid": 0, 00:13:55.355 "state": "enabled", 00:13:55.355 "thread": "nvmf_tgt_poll_group_000" 00:13:55.355 } 00:13:55.355 ]' 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.355 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.615 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:55.615 15:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:13:56.183 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.183 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:56.183 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.183 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.442 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.702 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.702 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:56.702 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.702 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.961 00:13:56.961 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.961 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.961 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.221 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.221 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.221 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.221 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.221 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.221 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.221 { 00:13:57.221 "auth": { 00:13:57.221 "dhgroup": "ffdhe2048", 00:13:57.221 "digest": "sha384", 00:13:57.221 "state": "completed" 00:13:57.221 }, 00:13:57.221 "cntlid": 63, 00:13:57.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:57.221 "listen_address": { 00:13:57.221 "adrfam": "IPv4", 00:13:57.221 "traddr": "10.0.0.3", 00:13:57.221 "trsvcid": "4420", 00:13:57.221 "trtype": "TCP" 00:13:57.221 }, 00:13:57.221 "peer_address": { 00:13:57.221 "adrfam": "IPv4", 00:13:57.221 "traddr": "10.0.0.1", 00:13:57.221 "trsvcid": "54408", 00:13:57.221 "trtype": "TCP" 00:13:57.221 }, 00:13:57.221 "qid": 0, 00:13:57.221 "state": "enabled", 00:13:57.221 "thread": "nvmf_tgt_poll_group_000" 00:13:57.221 } 00:13:57.221 ]' 00:13:57.221 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.221 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.221 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.221 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:57.221 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.480 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.480 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.480 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.739 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:57.739 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.308 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.567 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.567 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.567 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:58.567 15:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.825 00:13:58.825 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.825 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.825 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.083 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.083 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.083 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.083 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.083 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.083 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.083 { 00:13:59.083 "auth": { 00:13:59.083 "dhgroup": "ffdhe3072", 00:13:59.083 "digest": "sha384", 00:13:59.083 "state": "completed" 00:13:59.083 }, 00:13:59.083 "cntlid": 65, 00:13:59.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:13:59.083 "listen_address": { 00:13:59.083 "adrfam": "IPv4", 00:13:59.083 "traddr": "10.0.0.3", 00:13:59.083 "trsvcid": "4420", 00:13:59.083 "trtype": "TCP" 00:13:59.084 }, 00:13:59.084 "peer_address": { 00:13:59.084 "adrfam": "IPv4", 00:13:59.084 "traddr": "10.0.0.1", 00:13:59.084 "trsvcid": "49962", 00:13:59.084 "trtype": "TCP" 00:13:59.084 }, 00:13:59.084 "qid": 0, 00:13:59.084 "state": "enabled", 00:13:59.084 "thread": "nvmf_tgt_poll_group_000" 00:13:59.084 } 00:13:59.084 ]' 00:13:59.084 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.084 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.084 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.084 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:59.084 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.084 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.084 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.084 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.343 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:59.343 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:13:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:13:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:00.171 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:14:00.171 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.171 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:00.171 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:00.171 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:00.171 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.171 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.171 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.171 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.171 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.171 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.171 15:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.171 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.740 00:14:00.740 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.740 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.740 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.999 { 00:14:00.999 "auth": { 00:14:00.999 "dhgroup": "ffdhe3072", 00:14:00.999 "digest": "sha384", 00:14:00.999 "state": "completed" 00:14:00.999 }, 00:14:00.999 "cntlid": 67, 00:14:00.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:00.999 "listen_address": { 00:14:00.999 "adrfam": "IPv4", 00:14:00.999 "traddr": "10.0.0.3", 00:14:00.999 "trsvcid": "4420", 00:14:00.999 "trtype": "TCP" 00:14:00.999 }, 00:14:00.999 "peer_address": { 00:14:00.999 "adrfam": "IPv4", 00:14:00.999 "traddr": "10.0.0.1", 00:14:00.999 "trsvcid": "50002", 00:14:00.999 "trtype": "TCP" 00:14:00.999 }, 00:14:00.999 "qid": 0, 00:14:00.999 "state": "enabled", 00:14:00.999 "thread": "nvmf_tgt_poll_group_000" 00:14:00.999 } 00:14:00.999 ]' 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.999 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.259 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:14:01.259 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:14:01.827 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.827 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:01.827 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.828 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.828 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.828 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.828 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:01.828 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.396 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.655 00:14:02.655 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.655 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.655 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.943 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.943 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.943 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.943 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.943 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.943 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:02.943 { 00:14:02.943 "auth": { 00:14:02.943 "dhgroup": "ffdhe3072", 00:14:02.943 "digest": "sha384", 00:14:02.943 "state": "completed" 00:14:02.943 }, 00:14:02.943 "cntlid": 69, 00:14:02.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:02.943 "listen_address": { 00:14:02.943 "adrfam": "IPv4", 00:14:02.943 "traddr": "10.0.0.3", 00:14:02.943 "trsvcid": "4420", 00:14:02.943 "trtype": "TCP" 00:14:02.943 }, 00:14:02.943 "peer_address": { 00:14:02.943 "adrfam": "IPv4", 00:14:02.944 "traddr": "10.0.0.1", 00:14:02.944 "trsvcid": "50032", 00:14:02.944 "trtype": "TCP" 00:14:02.944 }, 00:14:02.944 "qid": 0, 00:14:02.944 "state": "enabled", 00:14:02.944 "thread": "nvmf_tgt_poll_group_000" 00:14:02.944 } 00:14:02.944 ]' 00:14:02.944 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:02.944 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.944 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.944 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:02.944 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.944 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.944 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:02.944 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.201 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:03.201 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:03.769 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.028 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:04.028 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.028 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.028 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.028 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.028 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:04.028 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:04.287 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:04.545 00:14:04.545 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.545 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.545 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.804 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.804 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.804 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.804 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.805 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.805 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.805 { 00:14:04.805 "auth": { 00:14:04.805 "dhgroup": "ffdhe3072", 00:14:04.805 "digest": "sha384", 00:14:04.805 "state": "completed" 00:14:04.805 }, 00:14:04.805 "cntlid": 71, 00:14:04.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:04.805 "listen_address": { 00:14:04.805 "adrfam": "IPv4", 00:14:04.805 "traddr": "10.0.0.3", 00:14:04.805 "trsvcid": "4420", 00:14:04.805 "trtype": "TCP" 00:14:04.805 }, 00:14:04.805 "peer_address": { 00:14:04.805 "adrfam": "IPv4", 00:14:04.805 "traddr": "10.0.0.1", 00:14:04.805 "trsvcid": "50056", 00:14:04.805 "trtype": "TCP" 00:14:04.805 }, 00:14:04.805 "qid": 0, 00:14:04.805 "state": "enabled", 00:14:04.805 "thread": "nvmf_tgt_poll_group_000" 00:14:04.805 } 00:14:04.805 ]' 00:14:04.805 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.805 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:04.805 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.064 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:05.064 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.064 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.064 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
00:14:05.064 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:05.064 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:05.323 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=:
00:14:05.323 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=:
00:14:05.892 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:05.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:05.892 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105
00:14:05.892 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.892 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.892 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.892 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:05.892 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:05.892 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:05.892 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:06.151 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:06.410
00:14:06.669 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:06.669 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:06.669 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:06.928 {
00:14:06.928 "auth": {
00:14:06.928 "dhgroup": "ffdhe4096",
00:14:06.928 "digest": "sha384",
00:14:06.928 "state": "completed"
00:14:06.928 },
00:14:06.928 "cntlid": 73,
00:14:06.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105",
00:14:06.928 "listen_address": {
00:14:06.928 "adrfam": "IPv4",
00:14:06.928 "traddr": "10.0.0.3",
00:14:06.928 "trsvcid": "4420",
00:14:06.928 "trtype": "TCP"
00:14:06.928 },
00:14:06.928 "peer_address": {
00:14:06.928 "adrfam": "IPv4",
00:14:06.928 "traddr": "10.0.0.1",
00:14:06.928 "trsvcid": "50076",
00:14:06.928 "trtype": "TCP"
00:14:06.928 },
00:14:06.928 "qid": 0,
00:14:06.928 "state": "enabled",
00:14:06.928 "thread": "nvmf_tgt_poll_group_000"
00:14:06.928 }
00:14:06.928 ]'
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:06.928 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:07.187 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=:
00:14:07.187 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=:
00:14:07.754 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:07.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:07.754 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105
00:14:07.754 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.754 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:07.754 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.754 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:07.754 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:07.754 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:08.322 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:08.580
00:14:08.580 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:08.580 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:08.580 15:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:08.839 {
00:14:08.839 "auth": {
00:14:08.839 "dhgroup": "ffdhe4096",
00:14:08.839 "digest": "sha384",
00:14:08.839 "state": "completed"
00:14:08.839 },
00:14:08.839 "cntlid": 75,
00:14:08.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105",
00:14:08.839 "listen_address": {
00:14:08.839 "adrfam": "IPv4",
00:14:08.839 "traddr": "10.0.0.3",
00:14:08.839 "trsvcid": "4420",
00:14:08.839 "trtype": "TCP"
00:14:08.839 },
00:14:08.839 "peer_address": {
00:14:08.839 "adrfam": "IPv4",
00:14:08.839 "traddr": "10.0.0.1",
00:14:08.839 "trsvcid": "49000",
00:14:08.839 "trtype": "TCP"
00:14:08.839 },
00:14:08.839 "qid": 0,
00:14:08.839 "state": "enabled",
00:14:08.839 "thread": "nvmf_tgt_poll_group_000"
00:14:08.839 }
00:14:08.839 ]'
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:08.839 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:09.098 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==:
00:14:09.098 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==:
00:14:09.666 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:09.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:09.666 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105
00:14:09.666 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.666 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:09.666 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.666 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:09.666 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:09.666 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:09.925 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:10.492
00:14:10.492 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:10.492 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:10.492 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:10.752 {
00:14:10.752 "auth": {
00:14:10.752 "dhgroup": "ffdhe4096",
00:14:10.752 "digest": "sha384",
00:14:10.752 "state": "completed"
00:14:10.752 },
00:14:10.752 "cntlid": 77,
00:14:10.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105",
00:14:10.752 "listen_address": {
00:14:10.752 "adrfam": "IPv4",
00:14:10.752 "traddr": "10.0.0.3",
00:14:10.752 "trsvcid": "4420",
00:14:10.752 "trtype": "TCP"
00:14:10.752 },
00:14:10.752 "peer_address": {
00:14:10.752 "adrfam": "IPv4",
00:14:10.752 "traddr": "10.0.0.1",
00:14:10.752 "trsvcid": "49022",
00:14:10.752 "trtype": "TCP"
00:14:10.752 },
00:14:10.752 "qid": 0,
00:14:10.752 "state": "enabled",
00:14:10.752 "thread": "nvmf_tgt_poll_group_000"
00:14:10.752 }
00:14:10.752 ]'
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:10.752 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:11.011 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs:
00:14:11.011 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs:
00:14:11.578 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:11.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:11.578 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105
00:14:11.578 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:11.578 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:11.578 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:11.578 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:11.578 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:11.578 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:11.837 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:12.406
00:14:12.406 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:12.406 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:12.406 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:12.666 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:12.666 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:12.666 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.666 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.666 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.666 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:12.666 {
00:14:12.666 "auth": {
00:14:12.666 "dhgroup": "ffdhe4096",
00:14:12.666 "digest": "sha384",
00:14:12.666 "state": "completed"
00:14:12.666 },
00:14:12.666 "cntlid": 79,
00:14:12.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105",
00:14:12.666 "listen_address": {
00:14:12.666 "adrfam": "IPv4",
00:14:12.666 "traddr": "10.0.0.3",
00:14:12.666 "trsvcid": "4420",
00:14:12.666 "trtype": "TCP"
00:14:12.666 },
00:14:12.666 "peer_address": {
00:14:12.666 "adrfam": "IPv4",
00:14:12.666 "traddr": "10.0.0.1",
00:14:12.666 "trsvcid": "49052",
00:14:12.666 "trtype": "TCP"
00:14:12.666 },
00:14:12.666 "qid": 0,
00:14:12.666 "state": "enabled",
00:14:12.666 "thread": "nvmf_tgt_poll_group_000"
00:14:12.666 }
00:14:12.666 ]'
00:14:12.666 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:12.666 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:12.666 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:12.666 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:12.666 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:12.666 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:12.666 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:12.666 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:12.925 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=:
00:14:12.925 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=:
00:14:13.493 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:13.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:13.493 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105
00:14:13.493 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.493 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:13.493 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.494 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:13.494 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:13.494 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:14:13.494 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:13.752 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:14.320
00:14:14.320 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:14.320 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:14.320 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:14.578 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:14.578 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:14.578 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.578 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:14.578 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.578 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:14.578 {
00:14:14.578 "auth": {
00:14:14.578 "dhgroup": "ffdhe6144",
00:14:14.578 "digest": "sha384",
00:14:14.578 "state": "completed"
00:14:14.578 },
00:14:14.578 "cntlid": 81,
00:14:14.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105",
00:14:14.578 "listen_address": {
00:14:14.578 "adrfam": "IPv4",
00:14:14.578 "traddr": "10.0.0.3",
00:14:14.578 "trsvcid": "4420",
00:14:14.578 "trtype": "TCP"
00:14:14.578 },
00:14:14.578 "peer_address": {
00:14:14.578 "adrfam": "IPv4",
00:14:14.578 "traddr": "10.0.0.1",
00:14:14.578 "trsvcid": "49066",
00:14:14.578 "trtype": "TCP"
00:14:14.578 },
00:14:14.578 "qid": 0,
00:14:14.578 "state": "enabled",
00:14:14.578 "thread": "nvmf_tgt_poll_group_000"
00:14:14.579 }
00:14:14.579 ]'
00:14:14.579 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:14.579 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:14.579 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:14.579 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:14.579 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:14.837 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:14.837 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:14.837 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:15.095 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=:
00:14:15.095 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=:
00:14:15.663 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:15.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:15.663 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105
00:14:15.663 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.663 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.663 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.663 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:15.663 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:14:15.663 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:15.923 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:16.492
00:14:16.492 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:16.492 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:16.492 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:16.492 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:16.492 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:16.492 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.750 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.750 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.750 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:16.750 {
00:14:16.750 "auth": {
00:14:16.750 "dhgroup": "ffdhe6144",
00:14:16.750 "digest": "sha384",
00:14:16.750 "state": "completed"
00:14:16.750 },
00:14:16.750 "cntlid": 83,
00:14:16.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105",
00:14:16.750 "listen_address": {
00:14:16.750 "adrfam": "IPv4",
00:14:16.750 "traddr": "10.0.0.3",
00:14:16.750 "trsvcid": "4420",
00:14:16.750 "trtype": "TCP"
00:14:16.750 },
00:14:16.750 "peer_address": {
00:14:16.750 "adrfam": "IPv4",
00:14:16.750 "traddr": "10.0.0.1",
00:14:16.750 "trsvcid": "49098",
00:14:16.750 "trtype": "TCP"
00:14:16.750 },
00:14:16.750 "qid": 0,
00:14:16.750 "state": "enabled",
00:14:16.750 "thread": "nvmf_tgt_poll_group_000"
00:14:16.750 }
00:14:16.750 ]'
00:14:16.750 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:16.750 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:16.750 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:16.750 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:16.750 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:16.750 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:16.751 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:16.751 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:17.010 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==:
00:14:17.010 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==:
00:14:17.578 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:17.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:17.578 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105
00:14:17.578 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.578 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.578 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.578 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:17.578 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:14:17.578 15:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:17.838 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:18.406
00:14:18.406 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:18.406 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:18.406 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:18.666 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:18.666 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:18.666 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.666 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.666 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.666 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:18.666 {
00:14:18.666 "auth": {
00:14:18.666 "dhgroup": "ffdhe6144",
00:14:18.666 "digest": "sha384",
00:14:18.666 "state": "completed"
00:14:18.666 },
00:14:18.666 "cntlid": 85,
00:14:18.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105",
00:14:18.666 "listen_address": {
00:14:18.666 "adrfam": "IPv4",
00:14:18.666 "traddr": "10.0.0.3",
00:14:18.666 "trsvcid": "4420",
00:14:18.666 "trtype": "TCP"
00:14:18.666 },
00:14:18.666 "peer_address": {
00:14:18.666 "adrfam": "IPv4",
00:14:18.666 "traddr": "10.0.0.1",
"trsvcid": "53742", 00:14:18.666 "trtype": "TCP" 00:14:18.666 }, 00:14:18.666 "qid": 0, 00:14:18.666 "state": "enabled", 00:14:18.666 "thread": "nvmf_tgt_poll_group_000" 00:14:18.666 } 00:14:18.666 ]' 00:14:18.666 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.666 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:18.667 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.667 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:18.667 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.667 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.667 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.667 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.926 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:18.926 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:19.866 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:19.866 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.435 00:14:20.435 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.435 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.435 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.694 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.694 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.694 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.694 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.694 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.694 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.694 { 00:14:20.694 "auth": { 00:14:20.694 "dhgroup": "ffdhe6144", 00:14:20.694 "digest": "sha384", 00:14:20.694 "state": "completed" 00:14:20.694 }, 00:14:20.694 "cntlid": 87, 00:14:20.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:20.694 "listen_address": { 00:14:20.694 "adrfam": "IPv4", 00:14:20.694 "traddr": "10.0.0.3", 00:14:20.694 "trsvcid": "4420", 00:14:20.694 "trtype": "TCP" 00:14:20.694 }, 00:14:20.694 "peer_address": { 00:14:20.694 "adrfam": "IPv4", 00:14:20.694 "traddr": "10.0.0.1", 
00:14:20.694 "trsvcid": "53788", 00:14:20.694 "trtype": "TCP" 00:14:20.694 }, 00:14:20.694 "qid": 0, 00:14:20.694 "state": "enabled", 00:14:20.694 "thread": "nvmf_tgt_poll_group_000" 00:14:20.694 } 00:14:20.694 ]' 00:14:20.694 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.694 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:20.694 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.694 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:20.694 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.953 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.953 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.953 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.212 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:14:21.212 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:14:21.780 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.780 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:21.780 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:21.780 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.780 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:21.780 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.039 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.607 00:14:22.607 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.607 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.607 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.867 { 00:14:22.867 "auth": { 00:14:22.867 "dhgroup": "ffdhe8192", 00:14:22.867 "digest": "sha384", 00:14:22.867 "state": "completed" 00:14:22.867 }, 00:14:22.867 "cntlid": 89, 00:14:22.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:22.867 "listen_address": { 00:14:22.867 "adrfam": "IPv4", 00:14:22.867 "traddr": "10.0.0.3", 00:14:22.867 "trsvcid": "4420", 00:14:22.867 "trtype": "TCP" 
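The connect_authenticate helper traced above pairs a target-side host registration with a host-side controller attach. A condensed sketch under the same NQNs, address, and flags as the log (rpc_cmd stands for the target's RPC wrapper from autotest_common.sh; $rpc is the host-socket invocation from the earlier sketch):

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105

    # Target: authorize the host NQN and bind its DH-HMAC-CHAP key pair
    # (key0 authenticates the host, ckey0 enables bidirectional auth).
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host: attach a bdev controller over TCP using the same key names.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0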
00:14:22.867 }, 00:14:22.867 "peer_address": { 00:14:22.867 "adrfam": "IPv4", 00:14:22.867 "traddr": "10.0.0.1", 00:14:22.867 "trsvcid": "53818", 00:14:22.867 "trtype": "TCP" 00:14:22.867 }, 00:14:22.867 "qid": 0, 00:14:22.867 "state": "enabled", 00:14:22.867 "thread": "nvmf_tgt_poll_group_000" 00:14:22.867 } 00:14:22.867 ]' 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.867 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.128 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:14:23.128 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:14:23.695 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.695 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:23.695 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.695 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.970 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.970 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.970 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:23.970 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:23.970 15:22:10 
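The qpair dump above is what the @75-@77 assertions parse. A sketch of those checks, using the same jq paths that appear in the trace:

    # Verify the admin qpair actually authenticated with the negotiated
    # parameters; every field below appears in the JSON printed above.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]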
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:23.970 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.970 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:23.970 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:23.970 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:23.970 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.970 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.970 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.970 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.271 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.271 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.271 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.271 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.531 00:14:24.791 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.791 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.791 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.050 { 00:14:25.050 "auth": { 00:14:25.050 "dhgroup": "ffdhe8192", 00:14:25.050 "digest": "sha384", 00:14:25.050 "state": "completed" 00:14:25.050 }, 00:14:25.050 "cntlid": 91, 00:14:25.050 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:25.050 "listen_address": { 00:14:25.050 "adrfam": "IPv4", 00:14:25.050 "traddr": "10.0.0.3", 00:14:25.050 "trsvcid": "4420", 00:14:25.050 "trtype": "TCP" 00:14:25.050 }, 00:14:25.050 "peer_address": { 00:14:25.050 "adrfam": "IPv4", 00:14:25.050 "traddr": "10.0.0.1", 00:14:25.050 "trsvcid": "53846", 00:14:25.050 "trtype": "TCP" 00:14:25.050 }, 00:14:25.050 "qid": 0, 00:14:25.050 "state": "enabled", 00:14:25.050 "thread": "nvmf_tgt_poll_group_000" 00:14:25.050 } 00:14:25.050 ]' 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.050 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.310 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:14:25.310 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:14:25.878 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.878 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:25.878 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.878 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.878 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.878 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.878 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:25.878 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.137 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.704 00:14:26.964 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.964 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.964 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.964 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.223 { 00:14:27.223 "auth": { 00:14:27.223 "dhgroup": "ffdhe8192", 
00:14:27.223 "digest": "sha384", 00:14:27.223 "state": "completed" 00:14:27.223 }, 00:14:27.223 "cntlid": 93, 00:14:27.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:27.223 "listen_address": { 00:14:27.223 "adrfam": "IPv4", 00:14:27.223 "traddr": "10.0.0.3", 00:14:27.223 "trsvcid": "4420", 00:14:27.223 "trtype": "TCP" 00:14:27.223 }, 00:14:27.223 "peer_address": { 00:14:27.223 "adrfam": "IPv4", 00:14:27.223 "traddr": "10.0.0.1", 00:14:27.223 "trsvcid": "53878", 00:14:27.223 "trtype": "TCP" 00:14:27.223 }, 00:14:27.223 "qid": 0, 00:14:27.223 "state": "enabled", 00:14:27.223 "thread": "nvmf_tgt_poll_group_000" 00:14:27.223 } 00:14:27.223 ]' 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.223 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.483 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:27.483 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:28.051 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.051 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:28.051 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.051 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.051 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.051 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.051 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:14:28.051 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:28.310 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:28.877 00:14:29.136 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.136 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.136 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.395 { 00:14:29.395 "auth": { 00:14:29.395 "dhgroup": 
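One detail worth surfacing from the key3 pass above: @68's parameter expansion emits the controller-key flag only when a paired ckey exists, so key3 is exercised with unidirectional authentication (its add_host carries --dhchap-key key3 and no --dhchap-ctrlr-key). The construct from the trace, with the positional $3 renamed for readability:

    # An empty/unset ckeys[keyid] makes the array expand to zero words,
    # silently dropping bidirectional (controller) auth for that key id.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"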
"ffdhe8192", 00:14:29.395 "digest": "sha384", 00:14:29.395 "state": "completed" 00:14:29.395 }, 00:14:29.395 "cntlid": 95, 00:14:29.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:29.395 "listen_address": { 00:14:29.395 "adrfam": "IPv4", 00:14:29.395 "traddr": "10.0.0.3", 00:14:29.395 "trsvcid": "4420", 00:14:29.395 "trtype": "TCP" 00:14:29.395 }, 00:14:29.395 "peer_address": { 00:14:29.395 "adrfam": "IPv4", 00:14:29.395 "traddr": "10.0.0.1", 00:14:29.395 "trsvcid": "50026", 00:14:29.395 "trtype": "TCP" 00:14:29.395 }, 00:14:29.395 "qid": 0, 00:14:29.395 "state": "enabled", 00:14:29.395 "thread": "nvmf_tgt_poll_group_000" 00:14:29.395 } 00:14:29.395 ]' 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.395 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.654 15:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:14:29.654 15:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:14:30.222 15:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.222 15:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:30.222 15:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.222 15:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.482 15:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.482 15:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:30.482 15:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.482 15:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.482 
15:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:30.482 15:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.744 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.004 00:14:31.004 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.004 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.004 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.263 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.263 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.263 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.263 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.263 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.263 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.263 { 00:14:31.263 "auth": { 00:14:31.263 "dhgroup": "null", 00:14:31.263 "digest": "sha512", 00:14:31.263 "state": "completed" 00:14:31.263 }, 00:14:31.263 "cntlid": 97, 00:14:31.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:31.263 "listen_address": { 00:14:31.263 "adrfam": "IPv4", 00:14:31.263 "traddr": "10.0.0.3", 00:14:31.263 "trsvcid": "4420", 00:14:31.263 "trtype": "TCP" 00:14:31.263 }, 00:14:31.263 "peer_address": { 00:14:31.263 "adrfam": "IPv4", 00:14:31.263 "traddr": "10.0.0.1", 00:14:31.263 "trsvcid": "50058", 00:14:31.263 "trtype": "TCP" 00:14:31.263 }, 00:14:31.263 "qid": 0, 00:14:31.263 "state": "enabled", 00:14:31.263 "thread": "nvmf_tgt_poll_group_000" 00:14:31.263 } 00:14:31.263 ]' 00:14:31.263 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.263 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.263 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.263 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:31.263 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.522 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.522 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.522 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.781 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:14:31.781 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:14:32.349 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.349 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:32.349 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.349 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.349 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:32.349 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.349 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:32.349 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.607 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.864 00:14:32.864 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.864 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.864 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.124 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.124 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.124 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.124 15:22:19 
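The recurring autotest_common.sh@563/@591 markers around every rpc_cmd are the harness silencing xtrace for the RPC call and then asserting its exit status, which is why the log shows nothing between xtrace_disable and the [[ 0 == 0 ]] check. Paraphrased as a pattern (a sketch with a hypothetical name, not the literal helper, which lives in common/autotest_common.sh):

    # Pattern behind the @563 (xtrace_disable) and @591 ([[ 0 == 0 ]]) markers:
    # mute command tracing around the RPC so secrets and JSON do not flood the
    # trace, then fail fast on a non-zero status.
    run_rpc_quietly() {    # hypothetical name, for illustration only
        xtrace_disable
        rpc_cmd "$@"
        local status=$?
        xtrace_restore
        [[ $status == 0 ]]
    }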
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.124 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.124 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.124 { 00:14:33.124 "auth": { 00:14:33.124 "dhgroup": "null", 00:14:33.124 "digest": "sha512", 00:14:33.124 "state": "completed" 00:14:33.124 }, 00:14:33.124 "cntlid": 99, 00:14:33.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:33.124 "listen_address": { 00:14:33.124 "adrfam": "IPv4", 00:14:33.124 "traddr": "10.0.0.3", 00:14:33.124 "trsvcid": "4420", 00:14:33.124 "trtype": "TCP" 00:14:33.124 }, 00:14:33.124 "peer_address": { 00:14:33.124 "adrfam": "IPv4", 00:14:33.124 "traddr": "10.0.0.1", 00:14:33.124 "trsvcid": "50078", 00:14:33.124 "trtype": "TCP" 00:14:33.124 }, 00:14:33.124 "qid": 0, 00:14:33.124 "state": "enabled", 00:14:33.124 "thread": "nvmf_tgt_poll_group_000" 00:14:33.124 } 00:14:33.124 ]' 00:14:33.124 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.124 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:33.124 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.382 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:33.382 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.382 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.382 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.382 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.641 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:14:33.641 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:14:34.207 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.207 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:34.207 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.207 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.207 15:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.207 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.207 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:34.207 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.465 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.722 00:14:34.722 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.722 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.722 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.980 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.980 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.980 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.980 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.238 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.238 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.238 { 00:14:35.238 "auth": { 00:14:35.238 "dhgroup": "null", 00:14:35.238 "digest": "sha512", 00:14:35.238 "state": "completed" 00:14:35.238 }, 00:14:35.238 "cntlid": 101, 00:14:35.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:35.238 "listen_address": { 00:14:35.238 "adrfam": "IPv4", 00:14:35.238 "traddr": "10.0.0.3", 00:14:35.238 "trsvcid": "4420", 00:14:35.238 "trtype": "TCP" 00:14:35.238 }, 00:14:35.238 "peer_address": { 00:14:35.238 "adrfam": "IPv4", 00:14:35.238 "traddr": "10.0.0.1", 00:14:35.238 "trsvcid": "50116", 00:14:35.238 "trtype": "TCP" 00:14:35.238 }, 00:14:35.238 "qid": 0, 00:14:35.238 "state": "enabled", 00:14:35.238 "thread": "nvmf_tgt_poll_group_000" 00:14:35.238 } 00:14:35.238 ]' 00:14:35.238 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.238 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:35.238 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.238 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:35.238 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.238 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.238 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.238 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.497 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:35.497 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:36.063 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.063 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:36.063 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.063 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
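Each combination ends with the symmetric teardown seen just above, so no authenticated controller or host registration leaks into the next digest/dhgroup pass. Condensed from the trace, with $rpc, $subnqn, and $hostnqn as in the earlier sketches:

    # Drop the SPDK host's bdev controller, disconnect the kernel controller,
    # then de-authorize the host so the next pass registers it afresh.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"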
-- common/autotest_common.sh@10 -- # set +x 00:14:36.063 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.063 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.063 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:36.063 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.322 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.582 00:14:36.841 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.841 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.841 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.100 { 00:14:37.100 "auth": { 00:14:37.100 "dhgroup": "null", 00:14:37.100 "digest": "sha512", 00:14:37.100 "state": "completed" 00:14:37.100 }, 00:14:37.100 "cntlid": 103, 00:14:37.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:37.100 "listen_address": { 00:14:37.100 "adrfam": "IPv4", 00:14:37.100 "traddr": "10.0.0.3", 00:14:37.100 "trsvcid": "4420", 00:14:37.100 "trtype": "TCP" 00:14:37.100 }, 00:14:37.100 "peer_address": { 00:14:37.100 "adrfam": "IPv4", 00:14:37.100 "traddr": "10.0.0.1", 00:14:37.100 "trsvcid": "50136", 00:14:37.100 "trtype": "TCP" 00:14:37.100 }, 00:14:37.100 "qid": 0, 00:14:37.100 "state": "enabled", 00:14:37.100 "thread": "nvmf_tgt_poll_group_000" 00:14:37.100 } 00:14:37.100 ]' 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.100 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.359 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:14:37.359 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:14:37.927 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.927 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:37.927 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.927 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.927 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:37.927 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.927 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.927 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:37.927 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.186 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.446 00:14:38.446 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.446 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.446 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.705 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.705 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.705 
15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.705 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.705 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.705 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.705 { 00:14:38.705 "auth": { 00:14:38.705 "dhgroup": "ffdhe2048", 00:14:38.705 "digest": "sha512", 00:14:38.705 "state": "completed" 00:14:38.705 }, 00:14:38.705 "cntlid": 105, 00:14:38.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:38.705 "listen_address": { 00:14:38.705 "adrfam": "IPv4", 00:14:38.705 "traddr": "10.0.0.3", 00:14:38.705 "trsvcid": "4420", 00:14:38.705 "trtype": "TCP" 00:14:38.705 }, 00:14:38.705 "peer_address": { 00:14:38.705 "adrfam": "IPv4", 00:14:38.705 "traddr": "10.0.0.1", 00:14:38.705 "trsvcid": "51726", 00:14:38.705 "trtype": "TCP" 00:14:38.705 }, 00:14:38.705 "qid": 0, 00:14:38.705 "state": "enabled", 00:14:38.705 "thread": "nvmf_tgt_poll_group_000" 00:14:38.705 } 00:14:38.705 ]' 00:14:38.705 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.965 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:38.965 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.965 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:38.965 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.965 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.965 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.965 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.224 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:14:39.224 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:14:39.792 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.792 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:39.792 15:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.792 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.792 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.792 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.792 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:39.792 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.053 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.312 00:14:40.313 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.313 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.313 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.570 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:40.570 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.570 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.570 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.828 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.828 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.828 { 00:14:40.828 "auth": { 00:14:40.828 "dhgroup": "ffdhe2048", 00:14:40.828 "digest": "sha512", 00:14:40.828 "state": "completed" 00:14:40.828 }, 00:14:40.828 "cntlid": 107, 00:14:40.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:40.828 "listen_address": { 00:14:40.828 "adrfam": "IPv4", 00:14:40.829 "traddr": "10.0.0.3", 00:14:40.829 "trsvcid": "4420", 00:14:40.829 "trtype": "TCP" 00:14:40.829 }, 00:14:40.829 "peer_address": { 00:14:40.829 "adrfam": "IPv4", 00:14:40.829 "traddr": "10.0.0.1", 00:14:40.829 "trsvcid": "51758", 00:14:40.829 "trtype": "TCP" 00:14:40.829 }, 00:14:40.829 "qid": 0, 00:14:40.829 "state": "enabled", 00:14:40.829 "thread": "nvmf_tgt_poll_group_000" 00:14:40.829 } 00:14:40.829 ]' 00:14:40.829 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.829 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.829 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.829 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:40.829 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.829 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.829 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.829 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.087 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:14:41.087 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:14:41.654 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.654 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:41.654 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.654 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.654 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.654 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.654 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:41.654 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.912 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.171 00:14:42.429 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.429 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.429 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.687 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.687 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.687 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.687 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.687 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.687 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.687 { 00:14:42.687 "auth": { 00:14:42.687 "dhgroup": "ffdhe2048", 00:14:42.687 "digest": "sha512", 00:14:42.687 "state": "completed" 00:14:42.687 }, 00:14:42.687 "cntlid": 109, 00:14:42.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:42.687 "listen_address": { 00:14:42.687 "adrfam": "IPv4", 00:14:42.687 "traddr": "10.0.0.3", 00:14:42.687 "trsvcid": "4420", 00:14:42.687 "trtype": "TCP" 00:14:42.687 }, 00:14:42.687 "peer_address": { 00:14:42.687 "adrfam": "IPv4", 00:14:42.687 "traddr": "10.0.0.1", 00:14:42.687 "trsvcid": "51800", 00:14:42.687 "trtype": "TCP" 00:14:42.687 }, 00:14:42.687 "qid": 0, 00:14:42.687 "state": "enabled", 00:14:42.687 "thread": "nvmf_tgt_poll_group_000" 00:14:42.687 } 00:14:42.687 ]' 00:14:42.687 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.687 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.687 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.687 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:42.687 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.687 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.687 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.687 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.945 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:42.945 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:43.513 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
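Each cycle in the trace above exercises one digest/dhgroup/key combination end to end: the host's allowed DH-HMAC-CHAP parameters are pinned with bdev_nvme_set_options, the host is authorized on the subsystem with the keys under test, a controller is attached through the SPDK host stack, the negotiated auth parameters are asserted on the target's qpairs, and the same handshake is then repeated with the kernel initiator via nvme connect. The sketch below condenses a single such pass using only the RPCs and nvme-cli invocations that appear in this log; the target-side RPC socket path (/var/tmp/spdk.sock) and the $KEY0/$CKEY0 variables holding the DHHC-1 secrets are illustrative assumptions, not values taken verbatim from this excerpt.

    # Minimal sketch of one connect_authenticate pass from target/auth.sh.
    # Assumptions (not shown in this excerpt): the target app listens on the
    # default RPC socket /var/tmp/spdk.sock, and $KEY0/$CKEY0 hold the
    # DHHC-1 secrets registered on both sides as key0/ckey0.
    set -e  # any failed assertion below aborts the run
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Pin the host side to a single digest/dhgroup combination.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Authorize the host on the subsystem with the keys under test.
    "$RPC" -s /var/tmp/spdk.sock nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller through the SPDK host stack, forcing authentication.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Assert the negotiated parameters on the target's qpairs.
    qpairs=$("$RPC" -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down, then repeat the handshake with the kernel initiator.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 \
        --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
    nvme disconnect -n "$SUBNQN"
    "$RPC" -s /var/tmp/spdk.sock nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The DHHC-1:NN: prefix on each secret encodes how the key material is represented: 00 means the secret is used as-is, while 01, 02 and 03 denote SHA-256-, SHA-384- and SHA-512-transformed keys, which is why the host and controller secrets in this run carry different prefixes. The outer loops visible in the trace repeat this pass for each dhgroup (null, ffdhe2048, ffdhe3072 and ffdhe4096 in this excerpt) and each of the four key indices key0 through key3.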
00:14:43.513 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:43.513 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.513 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.513 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.513 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.513 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:43.513 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.773 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.341 00:14:44.341 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.341 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.341 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.341 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.341 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.341 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.341 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.599 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.599 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.599 { 00:14:44.599 "auth": { 00:14:44.599 "dhgroup": "ffdhe2048", 00:14:44.599 "digest": "sha512", 00:14:44.599 "state": "completed" 00:14:44.599 }, 00:14:44.599 "cntlid": 111, 00:14:44.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:44.599 "listen_address": { 00:14:44.599 "adrfam": "IPv4", 00:14:44.599 "traddr": "10.0.0.3", 00:14:44.599 "trsvcid": "4420", 00:14:44.599 "trtype": "TCP" 00:14:44.599 }, 00:14:44.599 "peer_address": { 00:14:44.599 "adrfam": "IPv4", 00:14:44.599 "traddr": "10.0.0.1", 00:14:44.599 "trsvcid": "51824", 00:14:44.599 "trtype": "TCP" 00:14:44.599 }, 00:14:44.599 "qid": 0, 00:14:44.599 "state": "enabled", 00:14:44.599 "thread": "nvmf_tgt_poll_group_000" 00:14:44.599 } 00:14:44.599 ]' 00:14:44.599 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.599 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.599 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.599 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:44.599 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.600 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.600 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.600 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.859 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:14:44.859 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:14:45.426 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.426 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:45.426 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.426 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.426 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.426 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.426 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.426 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.426 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.778 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.037 00:14:46.037 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.038 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:14:46.038 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.296 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.296 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.296 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.296 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.296 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.296 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.296 { 00:14:46.296 "auth": { 00:14:46.296 "dhgroup": "ffdhe3072", 00:14:46.296 "digest": "sha512", 00:14:46.296 "state": "completed" 00:14:46.296 }, 00:14:46.296 "cntlid": 113, 00:14:46.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:46.296 "listen_address": { 00:14:46.296 "adrfam": "IPv4", 00:14:46.296 "traddr": "10.0.0.3", 00:14:46.296 "trsvcid": "4420", 00:14:46.296 "trtype": "TCP" 00:14:46.296 }, 00:14:46.296 "peer_address": { 00:14:46.296 "adrfam": "IPv4", 00:14:46.296 "traddr": "10.0.0.1", 00:14:46.296 "trsvcid": "51842", 00:14:46.296 "trtype": "TCP" 00:14:46.296 }, 00:14:46.296 "qid": 0, 00:14:46.296 "state": "enabled", 00:14:46.296 "thread": "nvmf_tgt_poll_group_000" 00:14:46.296 } 00:14:46.296 ]' 00:14:46.296 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.296 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.296 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.296 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:46.296 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.555 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.555 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.555 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.814 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:14:46.814 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret 
DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:14:47.383 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.383 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:47.383 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.383 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.383 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.383 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.383 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:47.383 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.644 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.903 00:14:47.903 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.903 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.903 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.163 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.163 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.163 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.163 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.163 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.163 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.163 { 00:14:48.163 "auth": { 00:14:48.163 "dhgroup": "ffdhe3072", 00:14:48.163 "digest": "sha512", 00:14:48.163 "state": "completed" 00:14:48.163 }, 00:14:48.163 "cntlid": 115, 00:14:48.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:48.163 "listen_address": { 00:14:48.163 "adrfam": "IPv4", 00:14:48.163 "traddr": "10.0.0.3", 00:14:48.163 "trsvcid": "4420", 00:14:48.163 "trtype": "TCP" 00:14:48.163 }, 00:14:48.163 "peer_address": { 00:14:48.163 "adrfam": "IPv4", 00:14:48.163 "traddr": "10.0.0.1", 00:14:48.163 "trsvcid": "56040", 00:14:48.163 "trtype": "TCP" 00:14:48.163 }, 00:14:48.163 "qid": 0, 00:14:48.163 "state": "enabled", 00:14:48.163 "thread": "nvmf_tgt_poll_group_000" 00:14:48.163 } 00:14:48.163 ]' 00:14:48.163 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.163 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.163 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.423 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:48.423 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.423 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.423 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.423 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.683 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:14:48.683 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 
911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:14:49.251 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.251 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:49.251 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.252 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.252 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.252 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.252 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:49.252 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.512 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.772 00:14:49.772 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.772 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.772 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.031 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.031 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.031 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.031 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.031 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.031 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.031 { 00:14:50.031 "auth": { 00:14:50.031 "dhgroup": "ffdhe3072", 00:14:50.031 "digest": "sha512", 00:14:50.031 "state": "completed" 00:14:50.031 }, 00:14:50.031 "cntlid": 117, 00:14:50.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:50.031 "listen_address": { 00:14:50.031 "adrfam": "IPv4", 00:14:50.031 "traddr": "10.0.0.3", 00:14:50.031 "trsvcid": "4420", 00:14:50.031 "trtype": "TCP" 00:14:50.031 }, 00:14:50.031 "peer_address": { 00:14:50.031 "adrfam": "IPv4", 00:14:50.031 "traddr": "10.0.0.1", 00:14:50.031 "trsvcid": "56068", 00:14:50.031 "trtype": "TCP" 00:14:50.031 }, 00:14:50.031 "qid": 0, 00:14:50.031 "state": "enabled", 00:14:50.031 "thread": "nvmf_tgt_poll_group_000" 00:14:50.031 } 00:14:50.031 ]' 00:14:50.031 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.031 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.031 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.290 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:50.290 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.290 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.290 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.290 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.549 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:50.550 15:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:51.118 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.118 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:51.119 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.119 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.119 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.119 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.119 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:51.119 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.379 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.638 00:14:51.638 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.638 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.638 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.898 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.898 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.898 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.898 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.898 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.898 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.898 { 00:14:51.898 "auth": { 00:14:51.898 "dhgroup": "ffdhe3072", 00:14:51.898 "digest": "sha512", 00:14:51.898 "state": "completed" 00:14:51.898 }, 00:14:51.898 "cntlid": 119, 00:14:51.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:51.898 "listen_address": { 00:14:51.898 "adrfam": "IPv4", 00:14:51.898 "traddr": "10.0.0.3", 00:14:51.898 "trsvcid": "4420", 00:14:51.898 "trtype": "TCP" 00:14:51.898 }, 00:14:51.898 "peer_address": { 00:14:51.898 "adrfam": "IPv4", 00:14:51.898 "traddr": "10.0.0.1", 00:14:51.898 "trsvcid": "56092", 00:14:51.898 "trtype": "TCP" 00:14:51.898 }, 00:14:51.898 "qid": 0, 00:14:51.898 "state": "enabled", 00:14:51.898 "thread": "nvmf_tgt_poll_group_000" 00:14:51.898 } 00:14:51.898 ]' 00:14:51.898 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.157 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.157 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.157 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:52.157 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.157 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.157 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.157 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.416 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:14:52.416 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:14:52.984 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.984 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:52.984 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.984 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.984 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.984 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:52.984 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.984 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:52.984 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.242 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.502 00:14:53.760 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.760 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.760 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.020 { 00:14:54.020 "auth": { 00:14:54.020 "dhgroup": "ffdhe4096", 00:14:54.020 "digest": "sha512", 00:14:54.020 "state": "completed" 00:14:54.020 }, 00:14:54.020 "cntlid": 121, 00:14:54.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:54.020 "listen_address": { 00:14:54.020 "adrfam": "IPv4", 00:14:54.020 "traddr": "10.0.0.3", 00:14:54.020 "trsvcid": "4420", 00:14:54.020 "trtype": "TCP" 00:14:54.020 }, 00:14:54.020 "peer_address": { 00:14:54.020 "adrfam": "IPv4", 00:14:54.020 "traddr": "10.0.0.1", 00:14:54.020 "trsvcid": "56126", 00:14:54.020 "trtype": "TCP" 00:14:54.020 }, 00:14:54.020 "qid": 0, 00:14:54.020 "state": "enabled", 00:14:54.020 "thread": "nvmf_tgt_poll_group_000" 00:14:54.020 } 00:14:54.020 ]' 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.020 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.279 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret 
DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:14:54.279 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:14:54.847 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.847 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:54.847 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.847 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.847 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.847 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.847 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:54.847 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.106 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.675 00:14:55.675 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.675 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.675 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.935 { 00:14:55.935 "auth": { 00:14:55.935 "dhgroup": "ffdhe4096", 00:14:55.935 "digest": "sha512", 00:14:55.935 "state": "completed" 00:14:55.935 }, 00:14:55.935 "cntlid": 123, 00:14:55.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:55.935 "listen_address": { 00:14:55.935 "adrfam": "IPv4", 00:14:55.935 "traddr": "10.0.0.3", 00:14:55.935 "trsvcid": "4420", 00:14:55.935 "trtype": "TCP" 00:14:55.935 }, 00:14:55.935 "peer_address": { 00:14:55.935 "adrfam": "IPv4", 00:14:55.935 "traddr": "10.0.0.1", 00:14:55.935 "trsvcid": "56154", 00:14:55.935 "trtype": "TCP" 00:14:55.935 }, 00:14:55.935 "qid": 0, 00:14:55.935 "state": "enabled", 00:14:55.935 "thread": "nvmf_tgt_poll_group_000" 00:14:55.935 } 00:14:55.935 ]' 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.935 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.194 15:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:14:56.194 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:14:56.790 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.790 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:56.790 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.790 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.790 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.790 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.790 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:56.790 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:57.071 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:57.072 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.072 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:57.072 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:57.072 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:57.072 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.072 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.072 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.072 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.072 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.072 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.072 15:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.072 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.640 00:14:57.640 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.640 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.640 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.640 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.640 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.640 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.640 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.899 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.899 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.899 { 00:14:57.899 "auth": { 00:14:57.899 "dhgroup": "ffdhe4096", 00:14:57.899 "digest": "sha512", 00:14:57.899 "state": "completed" 00:14:57.899 }, 00:14:57.899 "cntlid": 125, 00:14:57.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:57.899 "listen_address": { 00:14:57.899 "adrfam": "IPv4", 00:14:57.899 "traddr": "10.0.0.3", 00:14:57.899 "trsvcid": "4420", 00:14:57.899 "trtype": "TCP" 00:14:57.899 }, 00:14:57.899 "peer_address": { 00:14:57.899 "adrfam": "IPv4", 00:14:57.899 "traddr": "10.0.0.1", 00:14:57.899 "trsvcid": "55532", 00:14:57.899 "trtype": "TCP" 00:14:57.899 }, 00:14:57.899 "qid": 0, 00:14:57.899 "state": "enabled", 00:14:57.899 "thread": "nvmf_tgt_poll_group_000" 00:14:57.899 } 00:14:57.899 ]' 00:14:57.899 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.899 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:57.899 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.899 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:57.899 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.899 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.899 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.899 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.158 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:58.158 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:14:58.727 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.727 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:14:58.727 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.727 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.727 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.727 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.727 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:58.727 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:58.987 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.556 00:14:59.556 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.556 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.556 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.815 { 00:14:59.815 "auth": { 00:14:59.815 "dhgroup": "ffdhe4096", 00:14:59.815 "digest": "sha512", 00:14:59.815 "state": "completed" 00:14:59.815 }, 00:14:59.815 "cntlid": 127, 00:14:59.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:14:59.815 "listen_address": { 00:14:59.815 "adrfam": "IPv4", 00:14:59.815 "traddr": "10.0.0.3", 00:14:59.815 "trsvcid": "4420", 00:14:59.815 "trtype": "TCP" 00:14:59.815 }, 00:14:59.815 "peer_address": { 00:14:59.815 "adrfam": "IPv4", 00:14:59.815 "traddr": "10.0.0.1", 00:14:59.815 "trsvcid": "55556", 00:14:59.815 "trtype": "TCP" 00:14:59.815 }, 00:14:59.815 "qid": 0, 00:14:59.815 "state": "enabled", 00:14:59.815 "thread": "nvmf_tgt_poll_group_000" 00:14:59.815 } 00:14:59.815 ]' 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.074 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:15:00.074 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:15:00.643 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.643 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:00.643 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.643 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.643 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.643 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:00.643 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.643 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:00.643 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:00.902 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:00.902 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.902 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:00.902 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:00.902 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:00.902 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.902 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.902 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.902 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.902 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.902 15:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.902 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.902 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.470 00:15:01.470 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.470 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.470 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.729 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.729 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.729 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.729 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.729 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.729 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.729 { 00:15:01.729 "auth": { 00:15:01.729 "dhgroup": "ffdhe6144", 00:15:01.729 "digest": "sha512", 00:15:01.729 "state": "completed" 00:15:01.729 }, 00:15:01.729 "cntlid": 129, 00:15:01.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:01.729 "listen_address": { 00:15:01.729 "adrfam": "IPv4", 00:15:01.729 "traddr": "10.0.0.3", 00:15:01.729 "trsvcid": "4420", 00:15:01.729 "trtype": "TCP" 00:15:01.729 }, 00:15:01.729 "peer_address": { 00:15:01.729 "adrfam": "IPv4", 00:15:01.729 "traddr": "10.0.0.1", 00:15:01.729 "trsvcid": "55594", 00:15:01.729 "trtype": "TCP" 00:15:01.729 }, 00:15:01.729 "qid": 0, 00:15:01.729 "state": "enabled", 00:15:01.730 "thread": "nvmf_tgt_poll_group_000" 00:15:01.730 } 00:15:01.730 ]' 00:15:01.730 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.730 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:01.730 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.730 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:01.730 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.988 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.988 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.988 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.988 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:15:01.988 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.944 15:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.944 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.511 00:15:03.511 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.511 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.511 15:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.770 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.770 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.770 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.770 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.770 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.771 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.771 { 00:15:03.771 "auth": { 00:15:03.771 "dhgroup": "ffdhe6144", 00:15:03.771 "digest": "sha512", 00:15:03.771 "state": "completed" 00:15:03.771 }, 00:15:03.771 "cntlid": 131, 00:15:03.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:03.771 "listen_address": { 00:15:03.771 "adrfam": "IPv4", 00:15:03.771 "traddr": "10.0.0.3", 00:15:03.771 "trsvcid": "4420", 00:15:03.771 "trtype": "TCP" 00:15:03.771 }, 00:15:03.771 "peer_address": { 00:15:03.771 "adrfam": "IPv4", 00:15:03.771 "traddr": "10.0.0.1", 00:15:03.771 "trsvcid": "55616", 00:15:03.771 "trtype": "TCP" 00:15:03.771 }, 00:15:03.771 "qid": 0, 00:15:03.771 "state": "enabled", 00:15:03.771 "thread": "nvmf_tgt_poll_group_000" 00:15:03.771 } 00:15:03.771 ]' 00:15:03.771 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.771 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:03.771 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.771 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:03.771 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:15:04.030 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.030 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.030 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.290 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:15:04.290 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:15:04.857 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.857 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:04.857 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.857 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.857 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.857 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.857 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:04.857 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:05.117 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:05.117 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.117 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:05.117 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:05.117 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:05.117 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.117 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.117 15:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.117 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.117 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.117 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.117 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.117 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.375 00:15:05.634 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.634 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.635 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.894 { 00:15:05.894 "auth": { 00:15:05.894 "dhgroup": "ffdhe6144", 00:15:05.894 "digest": "sha512", 00:15:05.894 "state": "completed" 00:15:05.894 }, 00:15:05.894 "cntlid": 133, 00:15:05.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:05.894 "listen_address": { 00:15:05.894 "adrfam": "IPv4", 00:15:05.894 "traddr": "10.0.0.3", 00:15:05.894 "trsvcid": "4420", 00:15:05.894 "trtype": "TCP" 00:15:05.894 }, 00:15:05.894 "peer_address": { 00:15:05.894 "adrfam": "IPv4", 00:15:05.894 "traddr": "10.0.0.1", 00:15:05.894 "trsvcid": "55656", 00:15:05.894 "trtype": "TCP" 00:15:05.894 }, 00:15:05.894 "qid": 0, 00:15:05.894 "state": "enabled", 00:15:05.894 "thread": "nvmf_tgt_poll_group_000" 00:15:05.894 } 00:15:05.894 ]' 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.894 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.153 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:15:06.153 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:15:06.721 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.721 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:06.721 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.721 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.721 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.721 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.721 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:06.721 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.981 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.554 00:15:07.554 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.554 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.554 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.822 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.822 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.822 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.822 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.822 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.822 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.822 { 00:15:07.822 "auth": { 00:15:07.822 "dhgroup": "ffdhe6144", 00:15:07.822 "digest": "sha512", 00:15:07.822 "state": "completed" 00:15:07.822 }, 00:15:07.822 "cntlid": 135, 00:15:07.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:07.822 "listen_address": { 00:15:07.822 "adrfam": "IPv4", 00:15:07.822 "traddr": "10.0.0.3", 00:15:07.822 "trsvcid": "4420", 00:15:07.822 "trtype": "TCP" 00:15:07.822 }, 00:15:07.822 "peer_address": { 00:15:07.822 "adrfam": "IPv4", 00:15:07.822 "traddr": "10.0.0.1", 00:15:07.822 "trsvcid": "47736", 00:15:07.822 "trtype": "TCP" 00:15:07.822 }, 00:15:07.822 "qid": 0, 00:15:07.822 "state": "enabled", 00:15:07.822 "thread": "nvmf_tgt_poll_group_000" 00:15:07.822 } 00:15:07.822 ]' 00:15:07.822 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.822 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:07.822 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.822 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:07.822 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.082 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.082 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.082 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.341 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:15:08.342 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:15:08.910 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.910 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:08.910 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.910 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.910 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.910 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.910 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.910 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:08.910 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.169 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.737 00:15:09.737 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.737 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.737 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.997 { 00:15:09.997 "auth": { 00:15:09.997 "dhgroup": "ffdhe8192", 00:15:09.997 "digest": "sha512", 00:15:09.997 "state": "completed" 00:15:09.997 }, 00:15:09.997 "cntlid": 137, 00:15:09.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:09.997 "listen_address": { 00:15:09.997 "adrfam": "IPv4", 00:15:09.997 "traddr": "10.0.0.3", 00:15:09.997 "trsvcid": "4420", 00:15:09.997 "trtype": "TCP" 00:15:09.997 }, 00:15:09.997 "peer_address": { 00:15:09.997 "adrfam": "IPv4", 00:15:09.997 "traddr": "10.0.0.1", 00:15:09.997 "trsvcid": "47762", 00:15:09.997 "trtype": "TCP" 00:15:09.997 }, 00:15:09.997 "qid": 0, 00:15:09.997 "state": "enabled", 00:15:09.997 "thread": "nvmf_tgt_poll_group_000" 00:15:09.997 } 00:15:09.997 ]' 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.997 15:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.997 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.256 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:15:10.256 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:15:10.823 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.823 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:10.823 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.823 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.823 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.823 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.823 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.823 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:11.082 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:11.082 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.082 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:11.082 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:11.082 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:11.082 15:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.082 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.082 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.082 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.082 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.082 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.082 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.082 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.649 00:15:11.649 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.649 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.649 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.906 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.906 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.907 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.907 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.907 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.907 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.907 { 00:15:11.907 "auth": { 00:15:11.907 "dhgroup": "ffdhe8192", 00:15:11.907 "digest": "sha512", 00:15:11.907 "state": "completed" 00:15:11.907 }, 00:15:11.907 "cntlid": 139, 00:15:11.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:11.907 "listen_address": { 00:15:11.907 "adrfam": "IPv4", 00:15:11.907 "traddr": "10.0.0.3", 00:15:11.907 "trsvcid": "4420", 00:15:11.907 "trtype": "TCP" 00:15:11.907 }, 00:15:11.907 "peer_address": { 00:15:11.907 "adrfam": "IPv4", 00:15:11.907 "traddr": "10.0.0.1", 00:15:11.907 "trsvcid": "47802", 00:15:11.907 "trtype": "TCP" 00:15:11.907 }, 00:15:11.907 "qid": 0, 00:15:11.907 "state": "enabled", 00:15:11.907 "thread": "nvmf_tgt_poll_group_000" 00:15:11.907 } 00:15:11.907 ]' 00:15:11.907 15:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.907 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.907 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.165 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:12.165 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.165 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.165 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.165 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.423 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:15:12.423 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: --dhchap-ctrl-secret DHHC-1:02:OTM3MzM3ZGY3NjhhYmRiNzQxNTI2NWJkMTVhZDMwOTE1NzBkNjFjNzQ1YjVlYmQ341MLXQ==: 00:15:13.001 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.001 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:13.001 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.001 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.001 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.001 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.001 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:13.001 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.282 15:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.851 00:15:13.851 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.851 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.851 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.111 { 00:15:14.111 "auth": { 00:15:14.111 "dhgroup": "ffdhe8192", 00:15:14.111 "digest": "sha512", 00:15:14.111 "state": "completed" 00:15:14.111 }, 00:15:14.111 "cntlid": 141, 00:15:14.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:14.111 "listen_address": { 00:15:14.111 "adrfam": "IPv4", 00:15:14.111 "traddr": "10.0.0.3", 00:15:14.111 "trsvcid": "4420", 00:15:14.111 "trtype": "TCP" 00:15:14.111 }, 00:15:14.111 "peer_address": { 00:15:14.111 "adrfam": "IPv4", 00:15:14.111 "traddr": "10.0.0.1", 00:15:14.111 "trsvcid": "47830", 00:15:14.111 "trtype": "TCP" 00:15:14.111 }, 00:15:14.111 "qid": 0, 00:15:14.111 "state": 
"enabled", 00:15:14.111 "thread": "nvmf_tgt_poll_group_000" 00:15:14.111 } 00:15:14.111 ]' 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.111 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.371 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:15:14.371 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:01:Zjc4MjA4ODIyMWVjMzkyOThjZDRlMzkxMjdiMzJlMWUo8NRs: 00:15:14.941 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.941 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:14.941 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.941 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.941 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.941 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.941 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:14.941 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.200 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.769 00:15:15.769 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.769 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.769 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.028 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.028 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.028 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.028 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.028 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.028 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.028 { 00:15:16.028 "auth": { 00:15:16.028 "dhgroup": "ffdhe8192", 00:15:16.028 "digest": "sha512", 00:15:16.028 "state": "completed" 00:15:16.028 }, 00:15:16.028 "cntlid": 143, 00:15:16.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:16.028 "listen_address": { 00:15:16.028 "adrfam": "IPv4", 00:15:16.028 "traddr": "10.0.0.3", 00:15:16.028 "trsvcid": "4420", 00:15:16.028 "trtype": "TCP" 00:15:16.028 }, 00:15:16.028 "peer_address": { 00:15:16.028 "adrfam": "IPv4", 00:15:16.028 "traddr": "10.0.0.1", 00:15:16.028 "trsvcid": "47852", 00:15:16.028 "trtype": "TCP" 00:15:16.028 }, 00:15:16.028 "qid": 0, 00:15:16.028 
"state": "enabled", 00:15:16.028 "thread": "nvmf_tgt_poll_group_000" 00:15:16.028 } 00:15:16.028 ]' 00:15:16.028 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.288 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.288 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.288 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:16.288 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.288 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.288 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.288 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.548 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:15:16.548 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:15:17.116 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.116 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:17.116 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.116 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.116 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.116 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:17.116 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:17.116 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:17.116 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.116 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.116 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.376 15:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.944 00:15:17.944 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.944 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.944 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.204 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.204 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.204 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.204 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.204 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.204 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.204 { 00:15:18.204 "auth": { 00:15:18.204 "dhgroup": "ffdhe8192", 00:15:18.204 "digest": "sha512", 00:15:18.204 "state": "completed" 00:15:18.204 }, 00:15:18.204 
"cntlid": 145, 00:15:18.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:18.204 "listen_address": { 00:15:18.204 "adrfam": "IPv4", 00:15:18.204 "traddr": "10.0.0.3", 00:15:18.204 "trsvcid": "4420", 00:15:18.204 "trtype": "TCP" 00:15:18.204 }, 00:15:18.204 "peer_address": { 00:15:18.204 "adrfam": "IPv4", 00:15:18.204 "traddr": "10.0.0.1", 00:15:18.204 "trsvcid": "46210", 00:15:18.204 "trtype": "TCP" 00:15:18.204 }, 00:15:18.204 "qid": 0, 00:15:18.204 "state": "enabled", 00:15:18.204 "thread": "nvmf_tgt_poll_group_000" 00:15:18.204 } 00:15:18.204 ]' 00:15:18.204 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.204 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.204 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.204 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:18.204 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.463 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.463 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.463 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.723 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:15:18.723 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:00:OWZhMDkwNWEzOGRmZWE4NzMwZDA2ZWQxNjAzNzc0YTQ5NWI3MjkxN2RkZWQ5N2IzfuJrtg==: --dhchap-ctrl-secret DHHC-1:03:NDM5NmVlZTM3MjY0YjYxZTgyMDQxMTEwOWIyOTJhZWYzNjJlOGNjYTI3MmU2NThhY2Y0NDQ2YzUwNjNkNWQxYVTk3IU=: 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 00:15:19.303 15:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:19.303 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:19.872 2024/11/20 15:23:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:19.872 request: 00:15:19.872 { 00:15:19.872 "method": "bdev_nvme_attach_controller", 00:15:19.872 "params": { 00:15:19.872 "name": "nvme0", 00:15:19.872 "trtype": "tcp", 00:15:19.872 "traddr": "10.0.0.3", 00:15:19.872 "adrfam": "ipv4", 00:15:19.872 "trsvcid": "4420", 00:15:19.872 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:19.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:19.872 "prchk_reftag": false, 00:15:19.872 "prchk_guard": false, 00:15:19.872 "hdgst": false, 00:15:19.872 "ddgst": false, 00:15:19.872 "dhchap_key": "key2", 00:15:19.872 "allow_unrecognized_csi": false 00:15:19.872 } 00:15:19.872 } 00:15:19.872 Got JSON-RPC error response 00:15:19.872 GoRPCClient: error on JSON-RPC call 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
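The records above exercise the negative path: the target provisions only key1 for this host (target/auth.sh@144), so a host-side attach offering key2 must fail with the JSON-RPC I/O error shown, and the NOT wrapper inverts the exit status so the expected failure counts as a pass. A minimal standalone sketch of that pattern, assuming the same host RPC socket and NQNs as this run; this NOT only mirrors the behavior of the autotest helper, which is more elaborate:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Invert the exit status: succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then return 1; fi
        return 0
    }

    # Expected to fail: only key1 is provisioned on the target for this
    # host, so offering key2 cannot complete DH-HMAC-CHAP.
    NOT "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" \
        -b nvme0 --dhchap-key key2

The same shape covers the two follow-up cases in the log, where key1 is offered with the wrong controller key (ckey2) and then with a controller key the target never provisioned (ckey1).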
00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:19.872 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:20.441 2024/11/20 15:23:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:20.441 request: 00:15:20.441 { 00:15:20.441 "method": "bdev_nvme_attach_controller", 00:15:20.441 "params": { 00:15:20.441 "name": "nvme0", 00:15:20.441 "trtype": "tcp", 00:15:20.441 "traddr": "10.0.0.3", 00:15:20.441 "adrfam": "ipv4", 00:15:20.441 "trsvcid": "4420", 00:15:20.441 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:20.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:20.441 "prchk_reftag": false, 00:15:20.441 "prchk_guard": false, 00:15:20.441 "hdgst": false, 00:15:20.441 "ddgst": false, 00:15:20.441 "dhchap_key": "key1", 00:15:20.441 "dhchap_ctrlr_key": "ckey2", 00:15:20.441 "allow_unrecognized_csi": false 00:15:20.441 } 00:15:20.441 } 00:15:20.441 Got JSON-RPC error response 00:15:20.441 GoRPCClient: error on JSON-RPC call 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.441 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.442 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.010 2024/11/20 15:23:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:21.010 request: 00:15:21.010 { 00:15:21.010 "method": "bdev_nvme_attach_controller", 00:15:21.010 "params": { 00:15:21.010 "name": "nvme0", 00:15:21.010 "trtype": "tcp", 00:15:21.010 "traddr": "10.0.0.3", 00:15:21.010 "adrfam": "ipv4", 00:15:21.010 "trsvcid": "4420", 00:15:21.010 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:21.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:21.010 "prchk_reftag": false, 00:15:21.010 "prchk_guard": false, 00:15:21.010 "hdgst": false, 00:15:21.010 "ddgst": false, 00:15:21.010 "dhchap_key": "key1", 00:15:21.010 "dhchap_ctrlr_key": "ckey1", 00:15:21.010 "allow_unrecognized_csi": false 00:15:21.010 } 00:15:21.010 } 00:15:21.010 Got JSON-RPC error response 00:15:21.010 GoRPCClient: error on JSON-RPC call 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 77118 00:15:21.010 15:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 77118 ']' 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 77118 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77118 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.010 killing process with pid 77118 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77118' 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 77118 00:15:21.010 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 77118 00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81897 00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81897 00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81897 ']' 00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
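At this point the first target (pid 77118) has been killed and a fresh nvmf_tgt is launched with --wait-for-rpc, so key material can be loaded before initialization completes, and with -L nvmf_auth to enable auth-layer debug traces. A condensed sketch of that relaunch, using the binary path and network namespace from this run; the polling loop is a simplified stand-in for the waitforlisten helper seen in the log:

    # Restart the target inside its namespace; --wait-for-rpc holds
    # framework initialization until told to proceed over RPC.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Wait until the app answers on /var/tmp/spdk.sock before configuring it.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done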
00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.270 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.207 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.207 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:22.207 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:22.207 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:22.207 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.466 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.466 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:22.466 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81897 00:15:22.466 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81897 ']' 00:15:22.466 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.466 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.466 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
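The two traps registered above attach teardown to every exit path; since bash keeps only the most recent trap per signal, 'dumplogs; cleanup' supersedes the earlier nvmftestfini trap for SIGINT/SIGTERM/EXIT rather than stacking on it. A stub illustration of the pattern (dumplogs and cleanup are the script's own helpers, faked here):

    # On Ctrl-C, kill, or normal exit: collect logs, then tear down.
    dumplogs() { echo 'collecting nvmf_tgt and host logs...'; }
    cleanup()  { echo 'removing subsystems, namespaces, processes...'; }
    trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT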
00:15:22.466 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.466 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.726 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.726 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:22.726 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:22.726 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.726 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.726 null0 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kXq 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.5bT ]] 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5bT 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kNW 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.4hF ]] 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4hF 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.726 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:22.985 15:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.E8N 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Z6a ]] 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z6a 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.GUo 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
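The keyring_file_add_key records above register each generated DHHC-1 key file under a short name, so later auth RPCs reference keys by name instead of carrying inline secrets; the fourth pair has no controller key, which is why key3 is added without --dhchap-ctrlr-key and its authentication is unidirectional (host to controller only). An abridged sketch using the key files from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Register key material with the keyring: name, then file path.
    "$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.kXq
    "$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5bT
    "$rpc" keyring_file_add_key key3  /tmp/spdk.key-sha512.GUo   # no ckey3

    # Provisioning then refers to keys by name only:
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 \
        --dhchap-key key3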
00:15:22.985 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.920 nvme0n1 00:15:23.920 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.920 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.920 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.178 { 00:15:24.178 "auth": { 00:15:24.178 "dhgroup": "ffdhe8192", 00:15:24.178 "digest": "sha512", 00:15:24.178 "state": "completed" 00:15:24.178 }, 00:15:24.178 "cntlid": 1, 00:15:24.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:24.178 "listen_address": { 00:15:24.178 "adrfam": "IPv4", 00:15:24.178 "traddr": "10.0.0.3", 00:15:24.178 "trsvcid": "4420", 00:15:24.178 "trtype": "TCP" 00:15:24.178 }, 00:15:24.178 "peer_address": { 00:15:24.178 "adrfam": "IPv4", 00:15:24.178 "traddr": "10.0.0.1", 00:15:24.178 "trsvcid": "46278", 00:15:24.178 "trtype": "TCP" 00:15:24.178 }, 00:15:24.178 "qid": 0, 00:15:24.178 "state": "enabled", 00:15:24.178 "thread": "nvmf_tgt_poll_group_000" 00:15:24.178 } 00:15:24.178 ]' 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.178 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.437 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:15:24.437 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:15:25.006 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.006 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:25.006 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.006 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.006 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.006 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key3 00:15:25.006 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.006 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.006 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.006 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:25.006 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:25.264 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:25.264 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:25.264 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:25.264 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:25.264 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.264 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:25.264 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.264 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:25.264 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.264 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.522 2024/11/20 15:23:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:25.522 request: 00:15:25.522 { 00:15:25.522 "method": "bdev_nvme_attach_controller", 00:15:25.522 "params": { 00:15:25.522 "name": "nvme0", 00:15:25.522 "trtype": "tcp", 00:15:25.522 "traddr": "10.0.0.3", 00:15:25.522 "adrfam": "ipv4", 00:15:25.522 "trsvcid": "4420", 00:15:25.522 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:25.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:25.522 "prchk_reftag": false, 00:15:25.522 "prchk_guard": false, 00:15:25.522 "hdgst": false, 00:15:25.522 "ddgst": false, 00:15:25.522 "dhchap_key": "key3", 00:15:25.522 "allow_unrecognized_csi": false 00:15:25.522 } 00:15:25.522 } 00:15:25.522 Got JSON-RPC error response 00:15:25.522 GoRPCClient: error on JSON-RPC call 00:15:25.522 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:25.522 15:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:25.522 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:25.522 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.781 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.040 2024/11/20 15:23:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:26.040 request: 00:15:26.040 { 00:15:26.040 "method": "bdev_nvme_attach_controller", 00:15:26.040 "params": { 00:15:26.040 "name": "nvme0", 00:15:26.040 "trtype": "tcp", 00:15:26.040 "traddr": "10.0.0.3", 00:15:26.040 "adrfam": "ipv4", 00:15:26.040 "trsvcid": "4420", 00:15:26.040 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:26.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:26.040 "prchk_reftag": false, 00:15:26.040 "prchk_guard": false, 00:15:26.040 "hdgst": false, 00:15:26.040 "ddgst": false, 00:15:26.040 "dhchap_key": "key3", 00:15:26.040 "allow_unrecognized_csi": false 00:15:26.040 } 00:15:26.040 } 00:15:26.040 Got JSON-RPC error response 00:15:26.040 GoRPCClient: error on JSON-RPC call 00:15:26.299 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:26.299 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:26.299 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:26.299 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:26.299 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:26.299 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:26.299 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:26.299 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:26.299 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:26.299 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:26.557 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:26.558 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:26.817 2024/11/20 15:23:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:26.817 request: 00:15:26.817 { 00:15:26.817 "method": "bdev_nvme_attach_controller", 00:15:26.817 "params": { 00:15:26.817 "name": "nvme0", 00:15:26.817 "trtype": "tcp", 00:15:26.817 "traddr": "10.0.0.3", 00:15:26.817 "adrfam": "ipv4", 00:15:26.817 "trsvcid": "4420", 00:15:26.817 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:26.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:26.817 "prchk_reftag": false, 00:15:26.817 "prchk_guard": false, 00:15:26.817 "hdgst": false, 00:15:26.817 "ddgst": false, 00:15:26.817 "dhchap_key": "key0", 00:15:26.817 "dhchap_ctrlr_key": "key1", 00:15:26.817 "allow_unrecognized_csi": false 00:15:26.817 } 00:15:26.817 } 00:15:26.817 Got JSON-RPC error response 00:15:26.817 GoRPCClient: error on JSON-RPC call 00:15:26.817 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:26.818 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:26.818 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:26.818 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:26.818 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:26.818 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:26.818 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:27.077 nvme0n1 00:15:27.336 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:27.336 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.336 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:27.336 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.336 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.336 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.594 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 00:15:27.594 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.595 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:15:27.595 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.595 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:27.595 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:27.595 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:28.531 nvme0n1 00:15:28.531 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:28.531 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:28.531 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.790 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.790 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:28.790 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.790 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.790 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.790 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:28.790 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:28.790 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.051 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.051 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:15:29.051 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 --dhchap-secret DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: --dhchap-ctrl-secret DHHC-1:03:MGY2Yzc5Y2EyOWYzOTM3MmUwNmE0ZjFjYjY1ZjNkNzI3NjRkNzk1MDY4ZDNlYjc4NmFhYmNkNzZiMmM1YzRhN0lYATc=: 00:15:29.626 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
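A note on the two deliberately failing attaches traced above: bdev_nvme_set_options first limited the host to sha256 digests, then to the ffdhe2048 group, parameter sets under which the DH-HMAC-CHAP transaction for key3 cannot complete, hence the Code=-5 Input/output errors; the NOT wrapper from autotest_common.sh inverts the exit status so that an expected failure counts as a pass. With the full digest and dhgroup lists restored, the subsystem is then reached from the kernel initiator as well. A sketch of the nvme-cli connect just traced, with the DHHC-1 secrets shortened here (the full strings appear verbatim above):

    # Bidirectional DH-HMAC-CHAP from the kernel host: --dhchap-secret carries
    # the host's key, --dhchap-ctrl-secret the controller's key for mutual auth.
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105
    HOSTKEY='DHHC-1:02:ZmM2...'    # host secret, shortened; see trace for full value
    CTRLKEY='DHHC-1:03:MGY2...'    # controller secret, shortened; see trace
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid 911e0f85-f038-4b18-8714-1526ee6c3105 -l 0 \
        --dhchap-secret "$HOSTKEY" --dhchap-ctrl-secret "$CTRLKEY"

nvme_get_ctrlr, whose body is traced next, then recovers the kernel controller name by scanning /sys/devices/virtual/nvme-fabrics/ctl/nvme* for the entry whose subsystem NQN matches nqn.2024-03.io.spdk:cnode0.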
00:15:29.626 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:29.626 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:29.626 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:29.626 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:29.626 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:29.626 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:29.626 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.626 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.885 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:29.885 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:29.885 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:29.885 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:29.885 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.885 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:29.885 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.885 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:29.885 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:29.885 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:30.453 2024/11/20 15:23:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:30.453 request: 00:15:30.453 { 00:15:30.453 "method": "bdev_nvme_attach_controller", 00:15:30.453 "params": { 00:15:30.453 "name": "nvme0", 00:15:30.453 "trtype": "tcp", 00:15:30.453 "traddr": "10.0.0.3", 00:15:30.453 "adrfam": "ipv4", 
00:15:30.453 "trsvcid": "4420", 00:15:30.453 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:30.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105", 00:15:30.453 "prchk_reftag": false, 00:15:30.453 "prchk_guard": false, 00:15:30.453 "hdgst": false, 00:15:30.453 "ddgst": false, 00:15:30.453 "dhchap_key": "key1", 00:15:30.453 "allow_unrecognized_csi": false 00:15:30.453 } 00:15:30.453 } 00:15:30.453 Got JSON-RPC error response 00:15:30.453 GoRPCClient: error on JSON-RPC call 00:15:30.453 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:30.453 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:30.453 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:30.453 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:30.453 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:30.453 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:30.454 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:31.389 nvme0n1 00:15:31.389 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:31.389 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.389 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:31.648 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.648 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.648 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.907 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:31.907 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.907 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.907 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.907 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:31.907 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:31.907 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:32.165 nvme0n1 00:15:32.165 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:32.165 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:32.165 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.422 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.423 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.423 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: '' 2s 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: ]] 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YWIwNmY3MjgxNGUwMzM2ZDk4ZWMzZDQ5OGQ0MGFkYTbh44OM: 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:32.681 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: 2s 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: ]] 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZmM2MjQ2ZDUyYWU0MWU1MDc3M2NjOGQ0Mjc1N2RjNWI0YjQwY2E2NDQyNTRjNmQyFPhfNw==: 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:35.216 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:37.121 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:37.689 nvme0n1 00:15:37.689 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:37.689 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.689 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.689 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.689 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:37.689 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:38.256 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:38.256 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:38.256 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:38.515 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.515 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:38.515 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.515 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.515 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.515 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:38.515 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:38.774 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:38.774 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:38.774 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:39.032 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:15:39.600 2024/11/20 15:23:25 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:15:39.600 request: 00:15:39.600 { 00:15:39.600 "method": "bdev_nvme_set_keys", 00:15:39.600 "params": { 00:15:39.600 "name": "nvme0", 00:15:39.600 "dhchap_key": "key1", 00:15:39.600 "dhchap_ctrlr_key": "key3" 00:15:39.600 } 00:15:39.600 } 00:15:39.600 Got JSON-RPC error response 00:15:39.600 GoRPCClient: error on JSON-RPC call 00:15:39.600 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:39.600 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:39.600 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:39.600 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:39.600 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:39.600 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.600 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:39.858 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:39.858 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:40.882 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:40.882 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.882 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:41.140 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:41.140 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:41.140 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.140 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.140 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.140 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:41.140 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:41.140 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:42.079 nvme0n1 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:42.079 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:42.646 2024/11/20 15:23:28 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:15:42.646 request: 00:15:42.646 { 00:15:42.646 "method": "bdev_nvme_set_keys", 00:15:42.646 "params": { 00:15:42.646 "name": "nvme0", 00:15:42.646 "dhchap_key": "key2", 00:15:42.646 "dhchap_ctrlr_key": "key0" 00:15:42.646 } 00:15:42.646 } 00:15:42.646 Got JSON-RPC error response 00:15:42.646 GoRPCClient: error on JSON-RPC call 00:15:42.646 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:42.646 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:42.646 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:42.646 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:42.646 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:42.646 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 
00:15:42.646 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.913 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:42.913 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:43.852 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:43.852 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:43.852 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.111 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:44.111 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:44.111 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:44.111 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77159 00:15:44.111 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 77159 ']' 00:15:44.111 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 77159 00:15:44.370 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:44.370 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.370 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77159 00:15:44.370 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:44.370 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:44.370 killing process with pid 77159 00:15:44.370 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77159' 00:15:44.370 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 77159 00:15:44.370 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 77159 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:44.939 rmmod nvme_tcp 00:15:44.939 rmmod nvme_fabrics 00:15:44.939 rmmod nvme_keyring 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@128 -- # set -e 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 81897 ']' 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 81897 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81897 ']' 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81897 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81897 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.939 killing process with pid 81897 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81897' 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81897 00:15:44.939 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81897 00:15:45.198 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:45.198 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:45.198 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:45.198 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:45.198 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:45.198 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:15:45.198 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:45.198 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:45.198 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:45.198 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:45.198 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:45.456 15:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.456 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.715 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:45.715 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.kXq /tmp/spdk.key-sha256.kNW /tmp/spdk.key-sha384.E8N /tmp/spdk.key-sha512.GUo /tmp/spdk.key-sha512.5bT /tmp/spdk.key-sha384.4hF /tmp/spdk.key-sha256.Z6a '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:45.715 00:15:45.715 real 2m56.679s 00:15:45.715 user 6m53.239s 00:15:45.715 sys 0m35.845s 00:15:45.715 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.715 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.715 ************************************ 00:15:45.715 END TEST nvmf_auth_target 00:15:45.715 ************************************ 00:15:45.715 15:23:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:45.715 15:23:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:45.715 15:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:45.715 15:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.715 15:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:45.715 ************************************ 00:15:45.715 START TEST nvmf_bdevio_no_huge 00:15:45.715 ************************************ 00:15:45.715 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:45.715 * Looking for test storage... 
00:15:45.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:45.715 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:45.715 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:45.715 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:45.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.975 --rc genhtml_branch_coverage=1 00:15:45.975 --rc genhtml_function_coverage=1 00:15:45.975 --rc genhtml_legend=1 00:15:45.975 --rc geninfo_all_blocks=1 00:15:45.975 --rc geninfo_unexecuted_blocks=1 00:15:45.975 00:15:45.975 ' 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:45.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.975 --rc genhtml_branch_coverage=1 00:15:45.975 --rc genhtml_function_coverage=1 00:15:45.975 --rc genhtml_legend=1 00:15:45.975 --rc geninfo_all_blocks=1 00:15:45.975 --rc geninfo_unexecuted_blocks=1 00:15:45.975 00:15:45.975 ' 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:45.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.975 --rc genhtml_branch_coverage=1 00:15:45.975 --rc genhtml_function_coverage=1 00:15:45.975 --rc genhtml_legend=1 00:15:45.975 --rc geninfo_all_blocks=1 00:15:45.975 --rc geninfo_unexecuted_blocks=1 00:15:45.975 00:15:45.975 ' 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:45.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.975 --rc genhtml_branch_coverage=1 00:15:45.975 --rc genhtml_function_coverage=1 00:15:45.975 --rc genhtml_legend=1 00:15:45.975 --rc geninfo_all_blocks=1 00:15:45.975 --rc geninfo_unexecuted_blocks=1 00:15:45.975 00:15:45.975 ' 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:45.975 
15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.975 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:45.976 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:45.976 
15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:45.976 Cannot find device "nvmf_init_br" 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:45.976 Cannot find device "nvmf_init_br2" 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:45.976 Cannot find device "nvmf_tgt_br" 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.976 Cannot find device "nvmf_tgt_br2" 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:45.976 Cannot find device "nvmf_init_br" 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:45.976 Cannot find device "nvmf_init_br2" 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:45.976 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:46.260 Cannot find device "nvmf_tgt_br" 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:46.260 Cannot find device "nvmf_tgt_br2" 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:46.260 Cannot find device "nvmf_br" 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:46.260 Cannot find device "nvmf_init_if" 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:46.260 Cannot find device "nvmf_init_if2" 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:46.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.260 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.261 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.261 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:46.261 15:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:46.261 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:46.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:15:46.542 00:15:46.542 --- 10.0.0.3 ping statistics --- 00:15:46.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.542 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:46.542 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:46.542 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.116 ms 00:15:46.542 00:15:46.542 --- 10.0.0.4 ping statistics --- 00:15:46.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.542 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:15:46.542 00:15:46.542 --- 10.0.0.1 ping statistics --- 00:15:46.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.542 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:46.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:46.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:15:46.542 00:15:46.542 --- 10.0.0.2 ping statistics --- 00:15:46.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.542 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=82759 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 82759 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 82759 ']' 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.542 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.542 [2024-11-20 15:23:32.934795] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:15:46.542 [2024-11-20 15:23:32.934899] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:46.802 [2024-11-20 15:23:33.100978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.802 [2024-11-20 15:23:33.177422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.802 [2024-11-20 15:23:33.177524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.802 [2024-11-20 15:23:33.177535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.802 [2024-11-20 15:23:33.177544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.802 [2024-11-20 15:23:33.177551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.802 [2024-11-20 15:23:33.178217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:46.802 [2024-11-20 15:23:33.178475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:46.802 [2024-11-20 15:23:33.178926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:46.802 [2024-11-20 15:23:33.178927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.738 [2024-11-20 15:23:33.931022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.738 Malloc0 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.738 [2024-11-20 15:23:33.979408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:47.738 { 00:15:47.738 "params": { 00:15:47.738 "name": "Nvme$subsystem", 00:15:47.738 "trtype": "$TEST_TRANSPORT", 00:15:47.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:47.738 "adrfam": "ipv4", 00:15:47.738 "trsvcid": "$NVMF_PORT", 00:15:47.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:47.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:47.738 "hdgst": ${hdgst:-false}, 00:15:47.738 "ddgst": ${ddgst:-false} 00:15:47.738 }, 00:15:47.738 "method": "bdev_nvme_attach_controller" 00:15:47.738 } 00:15:47.738 EOF 00:15:47.738 )") 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
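Stripped of the xtrace noise, the rpc_cmd calls above provision the target side with five RPCs. A stand-alone sketch, assuming the default RPC socket (rpc_cmd behaves like scripts/rpc.py, which the tls suite further down assigns to rpc_py explicitly):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                      # transport options exactly as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                                         # MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512 B
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The initiator half is the JSON printed immediately below: gen_nvmf_target_json renders a single bdev_nvme_attach_controller entry for 10.0.0.3:4420, and bdevio consumes it via --json /dev/fd/62 --no-huge -s 1024.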
00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:47.738 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:47.738 "params": { 00:15:47.738 "name": "Nvme1", 00:15:47.738 "trtype": "tcp", 00:15:47.738 "traddr": "10.0.0.3", 00:15:47.738 "adrfam": "ipv4", 00:15:47.738 "trsvcid": "4420", 00:15:47.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:47.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:47.738 "hdgst": false, 00:15:47.738 "ddgst": false 00:15:47.738 }, 00:15:47.738 "method": "bdev_nvme_attach_controller" 00:15:47.738 }' 00:15:47.738 [2024-11-20 15:23:34.044217] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:15:47.739 [2024-11-20 15:23:34.044308] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82815 ] 00:15:47.739 [2024-11-20 15:23:34.206356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:47.997 [2024-11-20 15:23:34.281523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.997 [2024-11-20 15:23:34.281609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.997 [2024-11-20 15:23:34.281614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.256 I/O targets: 00:15:48.256 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:48.256 00:15:48.256 00:15:48.256 CUnit - A unit testing framework for C - Version 2.1-3 00:15:48.256 http://cunit.sourceforge.net/ 00:15:48.256 00:15:48.256 00:15:48.256 Suite: bdevio tests on: Nvme1n1 00:15:48.256 Test: blockdev write read block ...passed 00:15:48.256 Test: blockdev write zeroes read block ...passed 00:15:48.256 Test: blockdev write zeroes read no split ...passed 00:15:48.256 Test: blockdev write zeroes read split ...passed 00:15:48.256 Test: blockdev write zeroes read split partial ...passed 00:15:48.256 Test: blockdev reset ...[2024-11-20 15:23:34.664487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:48.256 [2024-11-20 15:23:34.664624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21db380 (9): Bad file descriptor 00:15:48.256 [2024-11-20 15:23:34.684432] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:48.256 passed 00:15:48.256 Test: blockdev write read 8 blocks ...passed 00:15:48.256 Test: blockdev write read size > 128k ...passed 00:15:48.256 Test: blockdev write read invalid size ...passed 00:15:48.256 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.256 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.256 Test: blockdev write read max offset ...passed 00:15:48.515 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.515 Test: blockdev writev readv 8 blocks ...passed 00:15:48.515 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.515 Test: blockdev writev readv block ...passed 00:15:48.515 Test: blockdev writev readv size > 128k ...passed 00:15:48.515 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.515 Test: blockdev comparev and writev ...[2024-11-20 15:23:34.859019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.515 [2024-11-20 15:23:34.859092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:48.515 [2024-11-20 15:23:34.859110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.515 [2024-11-20 15:23:34.859121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:48.515 [2024-11-20 15:23:34.859539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.515 [2024-11-20 15:23:34.859558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:48.515 [2024-11-20 15:23:34.859573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.515 [2024-11-20 15:23:34.859583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:48.516 [2024-11-20 15:23:34.859898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.516 [2024-11-20 15:23:34.859919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:48.516 [2024-11-20 15:23:34.859933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.516 [2024-11-20 15:23:34.859943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:48.516 [2024-11-20 15:23:34.860236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.516 [2024-11-20 15:23:34.860257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:48.516 [2024-11-20 15:23:34.860272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.516 [2024-11-20 15:23:34.860281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:48.516 passed 00:15:48.516 Test: blockdev nvme passthru rw ...passed 00:15:48.516 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:23:34.943065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:48.516 [2024-11-20 15:23:34.943114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:48.516 [2024-11-20 15:23:34.943221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:48.516 [2024-11-20 15:23:34.943234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:48.516 [2024-11-20 15:23:34.943335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:48.516 [2024-11-20 15:23:34.943346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:48.516 [2024-11-20 15:23:34.943426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:48.516 [2024-11-20 15:23:34.943437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:48.516 passed 00:15:48.516 Test: blockdev nvme admin passthru ...passed 00:15:48.775 Test: blockdev copy ...passed 00:15:48.775 00:15:48.775 Run Summary: Type Total Ran Passed Failed Inactive 00:15:48.775 suites 1 1 n/a 0 0 00:15:48.775 tests 23 23 23 0 0 00:15:48.775 asserts 152 152 152 0 n/a 00:15:48.775 00:15:48.775 Elapsed time = 0.997 seconds 00:15:49.034 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.034 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.034 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:49.292 rmmod nvme_tcp 00:15:49.292 rmmod nvme_fabrics 00:15:49.292 rmmod nvme_keyring 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 82759 ']' 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 82759 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 82759 ']' 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 82759 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82759 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82759' 00:15:49.292 killing process with pid 82759 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 82759 00:15:49.292 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 82759 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:49.861 15:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:49.861 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:50.119 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:50.119 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:50.119 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:50.119 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:50.119 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:50.119 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:50.119 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.119 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.119 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.119 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:50.119 00:15:50.119 real 0m4.514s 00:15:50.119 user 0m14.298s 00:15:50.119 sys 0m2.042s 00:15:50.119 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.119 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:50.119 ************************************ 00:15:50.119 END TEST nvmf_bdevio_no_huge 00:15:50.119 ************************************ 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:50.378 ************************************ 00:15:50.378 START TEST nvmf_tls 00:15:50.378 ************************************ 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:50.378 * Looking for test storage... 
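The lcov version gate traced below (lt 1.15 2, dispatched to cmp_versions) is a component-wise numeric compare on '.', '-' and ':' separators. A minimal sketch of the '<' case, with the behavior inferred from this traced run rather than taken from scripts/common.sh itself:

lt() {  # true when version $1 sorts before version $2
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller component decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
lt 1.15 2 && echo "old lcov: add the --rc branch/function coverage flags"

Run against the traced inputs this returns 0, which is why the --rc lcov_branch_coverage/lcov_function_coverage options get exported right after.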
00:15:50.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.378 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:50.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.638 --rc genhtml_branch_coverage=1 00:15:50.638 --rc genhtml_function_coverage=1 00:15:50.638 --rc genhtml_legend=1 00:15:50.638 --rc geninfo_all_blocks=1 00:15:50.638 --rc geninfo_unexecuted_blocks=1 00:15:50.638 00:15:50.638 ' 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:50.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.638 --rc genhtml_branch_coverage=1 00:15:50.638 --rc genhtml_function_coverage=1 00:15:50.638 --rc genhtml_legend=1 00:15:50.638 --rc geninfo_all_blocks=1 00:15:50.638 --rc geninfo_unexecuted_blocks=1 00:15:50.638 00:15:50.638 ' 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:50.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.638 --rc genhtml_branch_coverage=1 00:15:50.638 --rc genhtml_function_coverage=1 00:15:50.638 --rc genhtml_legend=1 00:15:50.638 --rc geninfo_all_blocks=1 00:15:50.638 --rc geninfo_unexecuted_blocks=1 00:15:50.638 00:15:50.638 ' 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:50.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.638 --rc genhtml_branch_coverage=1 00:15:50.638 --rc genhtml_function_coverage=1 00:15:50.638 --rc genhtml_legend=1 00:15:50.638 --rc geninfo_all_blocks=1 00:15:50.638 --rc geninfo_unexecuted_blocks=1 00:15:50.638 00:15:50.638 ' 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.638 15:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.638 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:50.639 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:50.639 
15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:50.639 Cannot find device "nvmf_init_br" 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:50.639 Cannot find device "nvmf_init_br2" 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:50.639 Cannot find device "nvmf_tgt_br" 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:50.639 Cannot find device "nvmf_tgt_br2" 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:50.639 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:50.639 Cannot find device "nvmf_init_br" 00:15:50.639 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:50.639 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:50.639 Cannot find device "nvmf_init_br2" 00:15:50.639 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:50.639 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:50.639 Cannot find device "nvmf_tgt_br" 00:15:50.639 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:50.639 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:50.639 Cannot find device "nvmf_tgt_br2" 00:15:50.639 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:50.639 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:50.639 Cannot find device "nvmf_br" 00:15:50.639 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:50.639 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:50.639 Cannot find device "nvmf_init_if" 00:15:50.639 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:50.639 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:50.898 Cannot find device "nvmf_init_if2" 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:50.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:50.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:50.898 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:51.157 15:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:51.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.147 ms 00:15:51.157 00:15:51.157 --- 10.0.0.3 ping statistics --- 00:15:51.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.157 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:51.157 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:51.157 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.144 ms 00:15:51.157 00:15:51.157 --- 10.0.0.4 ping statistics --- 00:15:51.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.157 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:15:51.157 00:15:51.157 --- 10.0.0.1 ping statistics --- 00:15:51.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.157 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:51.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:51.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:15:51.157 00:15:51.157 --- 10.0.0.2 ping statistics --- 00:15:51.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.157 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83063 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83063 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83063 ']' 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.157 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.158 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.158 [2024-11-20 15:23:37.575531] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:15:51.158 [2024-11-20 15:23:37.575664] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.416 [2024-11-20 15:23:37.724301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.416 [2024-11-20 15:23:37.806992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.416 [2024-11-20 15:23:37.807070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.416 [2024-11-20 15:23:37.807081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.416 [2024-11-20 15:23:37.807089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.416 [2024-11-20 15:23:37.807096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.416 [2024-11-20 15:23:37.807507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.353 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.353 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:52.353 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:52.353 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:52.353 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.353 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.353 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:52.353 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:52.612 true 00:15:52.612 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:52.612 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:52.875 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:52.875 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:52.875 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:53.134 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:53.134 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:53.393 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:53.393 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:53.393 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:53.652 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:15:53.652 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:53.910 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:53.910 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:53.910 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:53.910 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:54.169 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:54.169 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:54.169 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:54.427 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:54.427 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:54.687 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:54.687 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:54.687 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:54.947 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:54.947 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.YYxD6CufUe 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.XbzFuFoDJS 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.YYxD6CufUe 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.XbzFuFoDJS 00:15:55.206 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:55.466 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:56.036 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.YYxD6CufUe 00:15:56.036 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YYxD6CufUe 00:15:56.036 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:56.294 [2024-11-20 15:23:42.542498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.294 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:56.606 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:56.606 [2024-11-20 15:23:43.026301] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:56.606 [2024-11-20 15:23:43.026579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:56.606 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:56.879 malloc0 00:15:56.879 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:57.137 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YYxD6CufUe 00:15:57.395 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:57.653 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.YYxD6CufUe 00:16:09.852 Initializing NVMe Controllers 00:16:09.852 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:09.852 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:09.852 Initialization complete. Launching workers. 00:16:09.852 ======================================================== 00:16:09.852 Latency(us) 00:16:09.852 Device Information : IOPS MiB/s Average min max 00:16:09.852 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13753.46 53.72 4655.09 1000.00 47468.95 00:16:09.852 ======================================================== 00:16:09.852 Total : 13753.46 53.72 4655.09 1000.00 47468.95 00:16:09.852 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YYxD6CufUe 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YYxD6CufUe 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83430 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83430 /var/tmp/bdevperf.sock 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83430 ']' 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:09.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:09.852 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:09.852 [2024-11-20 15:23:54.403113] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
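The NVMeTLSkey-1 strings written to /tmp/tmp.YYxD6CufUe and /tmp/tmp.XbzFuFoDJS a few steps back were produced by format_interchange_psk. A standalone sketch of that helper, assuming (consistently with the traced output) that the interchange format base64-encodes the configured key followed by its little-endian CRC-32:

# Sketch of format_interchange_psk / format_key from the trace: wrap a
# raw PSK in the interchange format "NVMeTLSkey-1:<digest>:<base64(key+crc)>:".
format_interchange_psk() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed little-endian, matching the trace
print("NVMeTLSkey-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
PY
}
format_interchange_psk 00112233445566778899aabbccddeeff 1
# expected, per the trace: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: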
00:16:09.852 [2024-11-20 15:23:54.403220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83430 ] 00:16:09.852 [2024-11-20 15:23:54.541074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.852 [2024-11-20 15:23:54.642807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.852 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.852 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:09.852 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YYxD6CufUe 00:16:09.852 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:09.852 [2024-11-20 15:23:55.788151] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:09.852 TLSTESTn1 00:16:09.852 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:09.852 Running I/O for 10 seconds... 00:16:11.727 5194.00 IOPS, 20.29 MiB/s [2024-11-20T15:23:59.178Z] 5200.00 IOPS, 20.31 MiB/s [2024-11-20T15:24:00.132Z] 5197.33 IOPS, 20.30 MiB/s [2024-11-20T15:24:01.069Z] 5196.75 IOPS, 20.30 MiB/s [2024-11-20T15:24:02.005Z] 5168.80 IOPS, 20.19 MiB/s [2024-11-20T15:24:03.383Z] 5167.33 IOPS, 20.18 MiB/s [2024-11-20T15:24:04.320Z] 5159.00 IOPS, 20.15 MiB/s [2024-11-20T15:24:05.257Z] 5160.00 IOPS, 20.16 MiB/s [2024-11-20T15:24:06.195Z] 5171.33 IOPS, 20.20 MiB/s [2024-11-20T15:24:06.195Z] 5179.30 IOPS, 20.23 MiB/s 00:16:19.713 Latency(us) 00:16:19.713 [2024-11-20T15:24:06.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.713 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:19.713 Verification LBA range: start 0x0 length 0x2000 00:16:19.713 TLSTESTn1 : 10.01 5185.62 20.26 0.00 0.00 24647.97 4132.19 19160.73 00:16:19.713 [2024-11-20T15:24:06.195Z] =================================================================================================================== 00:16:19.713 [2024-11-20T15:24:06.195Z] Total : 5185.62 20.26 0.00 0.00 24647.97 4132.19 19160.73 00:16:19.713 { 00:16:19.713 "results": [ 00:16:19.713 { 00:16:19.713 "job": "TLSTESTn1", 00:16:19.713 "core_mask": "0x4", 00:16:19.713 "workload": "verify", 00:16:19.713 "status": "finished", 00:16:19.713 "verify_range": { 00:16:19.713 "start": 0, 00:16:19.713 "length": 8192 00:16:19.713 }, 00:16:19.713 "queue_depth": 128, 00:16:19.713 "io_size": 4096, 00:16:19.713 "runtime": 10.012488, 00:16:19.713 "iops": 5185.624192508396, 00:16:19.713 "mibps": 20.25634450198592, 00:16:19.713 "io_failed": 0, 00:16:19.713 "io_timeout": 0, 00:16:19.713 "avg_latency_us": 24647.969780348256, 00:16:19.713 "min_latency_us": 4132.189558232932, 00:16:19.713 "max_latency_us": 19160.72610441767 00:16:19.714 } 00:16:19.714 ], 00:16:19.714 "core_count": 1 00:16:19.714 } 00:16:19.714 15:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:19.714 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83430 00:16:19.714 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83430 ']' 00:16:19.714 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83430 00:16:19.714 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:19.714 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.714 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83430 00:16:19.714 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:19.714 killing process with pid 83430 00:16:19.714 Received shutdown signal, test time was about 10.000000 seconds 00:16:19.714 00:16:19.714 Latency(us) 00:16:19.714 [2024-11-20T15:24:06.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.714 [2024-11-20T15:24:06.196Z] =================================================================================================================== 00:16:19.714 [2024-11-20T15:24:06.196Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:19.714 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:19.714 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83430' 00:16:19.714 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83430 00:16:19.714 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83430 00:16:19.973 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XbzFuFoDJS 00:16:19.973 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:19.973 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XbzFuFoDJS 00:16:19.973 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:19.973 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.973 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:19.973 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XbzFuFoDJS 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XbzFuFoDJS 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83590 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83590 /var/tmp/bdevperf.sock 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83590 ']' 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:19.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.974 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:19.974 [2024-11-20 15:24:06.389929] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:16:19.974 [2024-11-20 15:24:06.390352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83590 ] 00:16:20.233 [2024-11-20 15:24:06.533194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.233 [2024-11-20 15:24:06.618752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.171 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.171 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:21.171 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XbzFuFoDJS 00:16:21.171 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:21.430 [2024-11-20 15:24:07.842500] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:21.430 [2024-11-20 15:24:07.848592] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:21.430 [2024-11-20 15:24:07.849430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d8ac0 (107): Transport endpoint is not connected 00:16:21.430 [2024-11-20 15:24:07.850401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d8ac0 (9): Bad file descriptor 00:16:21.430 [2024-11-20 
15:24:07.851396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:21.430 [2024-11-20 15:24:07.851612] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:21.430 [2024-11-20 15:24:07.851894] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:21.430 [2024-11-20 15:24:07.852106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:16:21.430 2024/11/20 15:24:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:21.430 request: 00:16:21.430 { 00:16:21.430 "method": "bdev_nvme_attach_controller", 00:16:21.430 "params": { 00:16:21.430 "name": "TLSTEST", 00:16:21.430 "trtype": "tcp", 00:16:21.430 "traddr": "10.0.0.3", 00:16:21.430 "adrfam": "ipv4", 00:16:21.430 "trsvcid": "4420", 00:16:21.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.430 "prchk_reftag": false, 00:16:21.430 "prchk_guard": false, 00:16:21.430 "hdgst": false, 00:16:21.430 "ddgst": false, 00:16:21.430 "psk": "key0", 00:16:21.430 "allow_unrecognized_csi": false 00:16:21.430 } 00:16:21.430 } 00:16:21.430 Got JSON-RPC error response 00:16:21.430 GoRPCClient: error on JSON-RPC call 00:16:21.430 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83590 00:16:21.430 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83590 ']' 00:16:21.430 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83590 00:16:21.430 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:21.430 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.430 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83590 00:16:21.690 killing process with pid 83590 00:16:21.690 Received shutdown signal, test time was about 10.000000 seconds 00:16:21.690 00:16:21.690 Latency(us) 00:16:21.690 [2024-11-20T15:24:08.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.690 [2024-11-20T15:24:08.172Z] =================================================================================================================== 00:16:21.690 [2024-11-20T15:24:08.172Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:21.690 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:21.690 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:21.690 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83590' 00:16:21.690 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83590 00:16:21.690 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 83590 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YYxD6CufUe 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YYxD6CufUe 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YYxD6CufUe 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YYxD6CufUe 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83648 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83648 /var/tmp/bdevperf.sock 00:16:21.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83648 ']' 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
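Each run_bdevperf case in this test, including the one starting here, reduces to the same sequence; condensed from the trace (the real helper in target/tls.sh also waits for the RPC socket to appear and tears the bdevperf process down afterwards):

# Shape of run_bdevperf as traced in this run: start a configless
# bdevperf, hand it the key and a TLS-enabled controller over its
# private RPC socket, then drive the verify workload.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YYxD6CufUe
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests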
00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.950 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:21.950 [2024-11-20 15:24:08.264088] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:16:21.950 [2024-11-20 15:24:08.264454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83648 ] 00:16:21.950 [2024-11-20 15:24:08.406531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.209 [2024-11-20 15:24:08.492153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.777 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.777 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:22.777 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YYxD6CufUe 00:16:23.035 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:23.297 [2024-11-20 15:24:09.687862] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:23.297 [2024-11-20 15:24:09.695274] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:23.297 [2024-11-20 15:24:09.695841] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:23.297 [2024-11-20 15:24:09.695998] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:23.297 [2024-11-20 15:24:09.696143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1691ac0 (107): Transport endpoint is not connected 00:16:23.297 [2024-11-20 15:24:09.697127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1691ac0 (9): Bad file descriptor 00:16:23.297 [2024-11-20 15:24:09.698119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:23.297 [2024-11-20 15:24:09.698224] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:23.297 [2024-11-20 15:24:09.698287] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:23.297 [2024-11-20 15:24:09.698361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
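The failure above is the point of this negative test: the TLS PSK identity the initiator presents binds the host and subsystem NQNs together, and only host1 was registered against key0 by nvmf_subsystem_add_host. A minimal sketch of the mismatch the target just rejected, with the identity layout taken from the tcp.c/posix.c error lines above:

# What the target knows: host1 paired with key0 during setup_nvmf_tgt.
# What this initiator presents: an identity naming host2, so the PSK
# lookup fails and the TLS handshake never completes.
registered="NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1"
presented="NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"
[[ $presented == "$registered" ]] || echo "Unable to find PSK for identity: $presented"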
00:16:23.297 2024/11/20 15:24:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:23.297 request: 00:16:23.297 { 00:16:23.297 "method": "bdev_nvme_attach_controller", 00:16:23.297 "params": { 00:16:23.297 "name": "TLSTEST", 00:16:23.297 "trtype": "tcp", 00:16:23.297 "traddr": "10.0.0.3", 00:16:23.297 "adrfam": "ipv4", 00:16:23.297 "trsvcid": "4420", 00:16:23.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.297 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:23.297 "prchk_reftag": false, 00:16:23.297 "prchk_guard": false, 00:16:23.297 "hdgst": false, 00:16:23.297 "ddgst": false, 00:16:23.297 "psk": "key0", 00:16:23.297 "allow_unrecognized_csi": false 00:16:23.297 } 00:16:23.297 } 00:16:23.297 Got JSON-RPC error response 00:16:23.297 GoRPCClient: error on JSON-RPC call 00:16:23.297 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83648 00:16:23.297 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83648 ']' 00:16:23.297 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83648 00:16:23.297 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:23.297 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.297 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83648 00:16:23.297 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:23.297 killing process with pid 83648 00:16:23.297 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:23.297 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83648' 00:16:23.297 Received shutdown signal, test time was about 10.000000 seconds 00:16:23.297 00:16:23.297 Latency(us) 00:16:23.297 [2024-11-20T15:24:09.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.297 [2024-11-20T15:24:09.779Z] =================================================================================================================== 00:16:23.297 [2024-11-20T15:24:09.779Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:23.297 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83648 00:16:23.556 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83648 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:23.815 15:24:10 
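The block above is the expected failure of the tls.sh@150 negative test: the client side registers a perfectly valid PSK file under the name key0, but then attaches with a host/subsystem pairing (host2 against cnode1) for which the target has no PSK configured, so the server-side lookup fails ("Could not find PSK for identity") and the attach comes back with Code=-5. Condensed, the client-side RPC sequence the test drives is the following sketch, using the same socket, key path, and NQNs that appear in the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# register the PSK file with the bdevperf app under the name key0
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YYxD6CufUe
# attempt a TLS attach; fails because the target knows no PSK for host2/cnode1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0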
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YYxD6CufUe 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YYxD6CufUe 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YYxD6CufUe 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YYxD6CufUe 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83700 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83700 /var/tmp/bdevperf.sock 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83700 ']' 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.815 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.815 [2024-11-20 15:24:10.113856] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:16:23.815 [2024-11-20 15:24:10.113963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83700 ] 00:16:23.815 [2024-11-20 15:24:10.270042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.074 [2024-11-20 15:24:10.354138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.642 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.642 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:24.642 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YYxD6CufUe 00:16:24.901 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:25.161 [2024-11-20 15:24:11.512319] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:25.161 [2024-11-20 15:24:11.522619] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:25.161 [2024-11-20 15:24:11.522671] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:25.161 [2024-11-20 15:24:11.522739] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:25.161 [2024-11-20 15:24:11.523086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ac0 (107): Transport endpoint is not connected 00:16:25.161 [2024-11-20 15:24:11.524065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ac0 (9): Bad file descriptor 00:16:25.161 [2024-11-20 15:24:11.525061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:25.161 [2024-11-20 15:24:11.525088] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:25.161 [2024-11-20 15:24:11.525100] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:25.161 [2024-11-20 15:24:11.525119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:25.161 2024/11/20 15:24:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:25.161 request: 00:16:25.161 { 00:16:25.161 "method": "bdev_nvme_attach_controller", 00:16:25.161 "params": { 00:16:25.161 "name": "TLSTEST", 00:16:25.161 "trtype": "tcp", 00:16:25.161 "traddr": "10.0.0.3", 00:16:25.161 "adrfam": "ipv4", 00:16:25.161 "trsvcid": "4420", 00:16:25.161 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:25.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:25.161 "prchk_reftag": false, 00:16:25.161 "prchk_guard": false, 00:16:25.161 "hdgst": false, 00:16:25.161 "ddgst": false, 00:16:25.161 "psk": "key0", 00:16:25.161 "allow_unrecognized_csi": false 00:16:25.161 } 00:16:25.161 } 00:16:25.161 Got JSON-RPC error response 00:16:25.161 GoRPCClient: error on JSON-RPC call 00:16:25.161 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83700 00:16:25.161 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83700 ']' 00:16:25.161 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83700 00:16:25.161 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:25.161 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.161 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83700 00:16:25.161 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:25.161 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:25.161 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83700' 00:16:25.161 killing process with pid 83700 00:16:25.161 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83700 00:16:25.161 Received shutdown signal, test time was about 10.000000 seconds 00:16:25.161 00:16:25.161 Latency(us) 00:16:25.161 [2024-11-20T15:24:11.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.161 [2024-11-20T15:24:11.643Z] =================================================================================================================== 00:16:25.161 [2024-11-20T15:24:11.643Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:25.161 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83700 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.421 15:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83755 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83755 /var/tmp/bdevperf.sock 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83755 ']' 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.421 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.681 [2024-11-20 15:24:11.934784] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:16:25.681 [2024-11-20 15:24:11.934893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83755 ] 00:16:25.681 [2024-11-20 15:24:12.077305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.939 [2024-11-20 15:24:12.161567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.507 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.507 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:26.507 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:26.766 [2024-11-20 15:24:13.120343] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:26.766 [2024-11-20 15:24:13.120413] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:26.766 2024/11/20 15:24:13 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:26.766 request: 00:16:26.766 { 00:16:26.766 "method": "keyring_file_add_key", 00:16:26.766 "params": { 00:16:26.766 "name": "key0", 00:16:26.766 "path": "" 00:16:26.766 } 00:16:26.766 } 00:16:26.766 Got JSON-RPC error response 00:16:26.766 GoRPCClient: error on JSON-RPC call 00:16:26.766 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:27.025 [2024-11-20 15:24:13.376114] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:27.025 [2024-11-20 15:24:13.376196] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:27.025 2024/11/20 15:24:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:16:27.025 request: 00:16:27.025 { 00:16:27.025 "method": "bdev_nvme_attach_controller", 00:16:27.025 "params": { 00:16:27.025 "name": "TLSTEST", 00:16:27.025 "trtype": "tcp", 00:16:27.025 "traddr": "10.0.0.3", 00:16:27.025 "adrfam": "ipv4", 00:16:27.026 "trsvcid": "4420", 00:16:27.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.026 "prchk_reftag": false, 00:16:27.026 "prchk_guard": false, 00:16:27.026 "hdgst": false, 00:16:27.026 "ddgst": false, 00:16:27.026 "psk": "key0", 00:16:27.026 "allow_unrecognized_csi": false 00:16:27.026 } 00:16:27.026 } 00:16:27.026 Got JSON-RPC error response 00:16:27.026 GoRPCClient: error on JSON-RPC call 00:16:27.026 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83755 00:16:27.026 15:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83755 ']' 00:16:27.026 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83755 00:16:27.026 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:27.026 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.026 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83755 00:16:27.026 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:27.026 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:27.026 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83755' 00:16:27.026 killing process with pid 83755 00:16:27.026 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83755 00:16:27.026 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.026 00:16:27.026 Latency(us) 00:16:27.026 [2024-11-20T15:24:13.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.026 [2024-11-20T15:24:13.508Z] =================================================================================================================== 00:16:27.026 [2024-11-20T15:24:13.508Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.026 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83755 00:16:27.286 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:27.286 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:27.286 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:27.286 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:27.286 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:27.286 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 83063 00:16:27.286 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83063 ']' 00:16:27.286 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83063 00:16:27.286 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:27.286 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.286 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83063 00:16:27.544 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:27.544 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:27.545 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83063' 00:16:27.545 killing process with pid 83063 00:16:27.545 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83063 00:16:27.545 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83063 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
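tls.sh@156 just above repeats the pattern with an empty string as the key path: keyring_file validates the path before anything is registered and rejects anything that is not absolute ("Non-absolute paths are not allowed"), so no key0 exists and the attach fails with "Required key not available". A minimal illustration — the /tmp/psk path in the second line is a hypothetical well-formed example, not taken from this run:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''        # rejected: non-absolute path
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk  # hypothetical absolute path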
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.BdodikX28j 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.BdodikX28j 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83823 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83823 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83823 ']' 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.803 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.804 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.804 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.804 [2024-11-20 15:24:14.182470] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
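The key_long value printed above is the TLS PSK interchange form of the raw 48-character configured key: tls.sh@160 feeds that key and digest number 2 through the inline python helper at nvmf/common.sh@733. Below is a minimal sketch of what that helper appears to compute, assuming the standard NVMe TLS PSK interchange layout of base64(configured PSK || little-endian CRC32 of the PSK) tagged with the digest id; the script is a re-creation for illustration, not a copy of common.sh:

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'PY'
import base64, struct, sys, zlib
k = sys.argv[1].encode()
# append the CRC32 of the key, little-endian, then base64 the whole payload
b64 = base64.b64encode(k + struct.pack('<I', zlib.crc32(k))).decode()
print(f'NVMeTLSkey-1:02:{b64}:')  # digest 2 selects the ':02:' (SHA-384) variant
PY

Under that layout assumption the output reproduces the NVMeTLSkey-1:02:...: string captured above, which is then written to /tmp/tmp.BdodikX28j and locked down with chmod 0600 before use.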
00:16:27.804 [2024-11-20 15:24:14.182571] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.062 [2024-11-20 15:24:14.317764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.062 [2024-11-20 15:24:14.398414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.062 [2024-11-20 15:24:14.398492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.062 [2024-11-20 15:24:14.398503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.062 [2024-11-20 15:24:14.398511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.062 [2024-11-20 15:24:14.398519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.062 [2024-11-20 15:24:14.398941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.036 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.036 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:29.036 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:29.036 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:29.036 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:29.036 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.036 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.BdodikX28j 00:16:29.036 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BdodikX28j 00:16:29.036 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:29.036 [2024-11-20 15:24:15.411001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.036 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:29.295 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:29.554 [2024-11-20 15:24:15.914300] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:29.554 [2024-11-20 15:24:15.914569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:29.554 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:29.814 malloc0 00:16:29.814 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:30.074 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.BdodikX28j 00:16:30.333 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BdodikX28j 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BdodikX28j 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83938 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83938 /var/tmp/bdevperf.sock 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83938 ']' 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:30.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.594 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.594 [2024-11-20 15:24:17.001163] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
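At this point tls.sh@166 (setup_nvmf_tgt) has configured the target end of the TLS path. Condensed from the RPC calls traced above, the sequence is: create the TCP transport, create the subsystem, open a TLS-enabled listener (the -k flag), back the subsystem with a malloc bdev, register the 0600-permission PSK file, and finally allow host1 to connect with that key:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.BdodikX28j
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With both ends holding the same interchange PSK, the bdevperf attach that follows succeeds and the verify workload runs for the full 10 seconds.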
00:16:30.594 [2024-11-20 15:24:17.001273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83938 ] 00:16:30.853 [2024-11-20 15:24:17.151716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.853 [2024-11-20 15:24:17.235557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.788 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.788 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:31.788 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BdodikX28j 00:16:31.788 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:32.047 [2024-11-20 15:24:18.369995] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:32.047 TLSTESTn1 00:16:32.047 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:32.308 Running I/O for 10 seconds... 00:16:34.186 5151.00 IOPS, 20.12 MiB/s [2024-11-20T15:24:21.608Z] 5187.50 IOPS, 20.26 MiB/s [2024-11-20T15:24:22.985Z] 5199.00 IOPS, 20.31 MiB/s [2024-11-20T15:24:23.921Z] 5203.75 IOPS, 20.33 MiB/s [2024-11-20T15:24:24.858Z] 5208.60 IOPS, 20.35 MiB/s [2024-11-20T15:24:25.836Z] 5220.67 IOPS, 20.39 MiB/s [2024-11-20T15:24:26.773Z] 5235.00 IOPS, 20.45 MiB/s [2024-11-20T15:24:27.711Z] 5241.00 IOPS, 20.47 MiB/s [2024-11-20T15:24:28.651Z] 5243.56 IOPS, 20.48 MiB/s [2024-11-20T15:24:28.651Z] 5244.70 IOPS, 20.49 MiB/s 00:16:42.169 Latency(us) 00:16:42.169 [2024-11-20T15:24:28.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.169 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:42.169 Verification LBA range: start 0x0 length 0x2000 00:16:42.169 TLSTESTn1 : 10.01 5250.96 20.51 0.00 0.00 24341.49 3632.12 20002.96 00:16:42.169 [2024-11-20T15:24:28.651Z] =================================================================================================================== 00:16:42.169 [2024-11-20T15:24:28.651Z] Total : 5250.96 20.51 0.00 0.00 24341.49 3632.12 20002.96 00:16:42.169 { 00:16:42.169 "results": [ 00:16:42.169 { 00:16:42.169 "job": "TLSTESTn1", 00:16:42.169 "core_mask": "0x4", 00:16:42.169 "workload": "verify", 00:16:42.169 "status": "finished", 00:16:42.169 "verify_range": { 00:16:42.169 "start": 0, 00:16:42.169 "length": 8192 00:16:42.169 }, 00:16:42.169 "queue_depth": 128, 00:16:42.169 "io_size": 4096, 00:16:42.169 "runtime": 10.012461, 00:16:42.169 "iops": 5250.956782753012, 00:16:42.169 "mibps": 20.511549932628952, 00:16:42.169 "io_failed": 0, 00:16:42.169 "io_timeout": 0, 00:16:42.169 "avg_latency_us": 24341.49191295663, 00:16:42.169 "min_latency_us": 3632.115662650602, 00:16:42.169 "max_latency_us": 20002.955823293174 00:16:42.169 } 00:16:42.169 ], 00:16:42.169 "core_count": 1 00:16:42.169 } 00:16:42.169 15:24:28 
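The IOPS and MiB/s columns in the results block above are mutually consistent: with 4096-byte I/Os, throughput in MiB/s is simply iops x io_size / 2^20, which lines up with the reported mibps field:

python3 -c 'print(5250.956782753012 * 4096 / 2**20)'  # -> 20.5115... (the reported mibps)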
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.169 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83938 00:16:42.169 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83938 ']' 00:16:42.169 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83938 00:16:42.169 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:42.169 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:42.169 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83938 00:16:42.169 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:42.169 killing process with pid 83938 00:16:42.169 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:42.169 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83938' 00:16:42.169 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83938 00:16:42.169 Received shutdown signal, test time was about 10.000000 seconds 00:16:42.169 00:16:42.169 Latency(us) 00:16:42.169 [2024-11-20T15:24:28.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.169 [2024-11-20T15:24:28.651Z] =================================================================================================================== 00:16:42.169 [2024-11-20T15:24:28.651Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.169 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83938 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.BdodikX28j 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BdodikX28j 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BdodikX28j 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BdodikX28j 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.BdodikX28j 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84100 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84100 /var/tmp/bdevperf.sock 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84100 ']' 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.739 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:42.739 [2024-11-20 15:24:28.984468] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:16:42.739 [2024-11-20 15:24:28.984572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84100 ] 00:16:42.739 [2024-11-20 15:24:29.136086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.997 [2024-11-20 15:24:29.220703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.566 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.566 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:43.566 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BdodikX28j 00:16:43.862 [2024-11-20 15:24:30.132180] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BdodikX28j': 0100666 00:16:43.862 [2024-11-20 15:24:30.132240] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:43.862 2024/11/20 15:24:30 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.BdodikX28j], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:43.862 request: 00:16:43.862 { 00:16:43.862 "method": "keyring_file_add_key", 00:16:43.862 "params": { 00:16:43.862 "name": "key0", 00:16:43.862 "path": "/tmp/tmp.BdodikX28j" 00:16:43.862 } 00:16:43.862 } 00:16:43.862 Got JSON-RPC error response 00:16:43.862 GoRPCClient: error on JSON-RPC call 00:16:43.862 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:44.125 [2024-11-20 15:24:30.359992] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:44.125 [2024-11-20 15:24:30.360334] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:44.125 2024/11/20 15:24:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:16:44.125 request: 00:16:44.125 { 00:16:44.125 "method": "bdev_nvme_attach_controller", 00:16:44.125 "params": { 00:16:44.125 "name": "TLSTEST", 00:16:44.125 "trtype": "tcp", 00:16:44.125 "traddr": "10.0.0.3", 00:16:44.125 "adrfam": "ipv4", 00:16:44.125 "trsvcid": "4420", 00:16:44.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:44.125 "prchk_reftag": false, 00:16:44.125 "prchk_guard": false, 00:16:44.125 "hdgst": false, 00:16:44.125 "ddgst": false, 00:16:44.125 "psk": "key0", 00:16:44.125 "allow_unrecognized_csi": false 00:16:44.125 } 00:16:44.125 } 00:16:44.125 Got JSON-RPC error response 00:16:44.125 GoRPCClient: error on JSON-RPC call 00:16:44.125 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84100 00:16:44.125 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84100 ']' 00:16:44.125 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84100 00:16:44.125 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:44.125 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.125 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84100 00:16:44.125 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:44.125 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:44.125 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84100' 00:16:44.125 killing process with pid 84100 00:16:44.125 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84100 00:16:44.125 Received shutdown signal, test time was about 10.000000 seconds 00:16:44.125 00:16:44.125 Latency(us) 00:16:44.125 [2024-11-20T15:24:30.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.125 [2024-11-20T15:24:30.607Z] =================================================================================================================== 00:16:44.125 [2024-11-20T15:24:30.607Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:44.125 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84100 00:16:44.384 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
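tls.sh@172 is the file-permissions variant of the same negative test: after the chmod 0666 earlier, keyring_file refuses to register the key ("Invalid permissions for key file '/tmp/tmp.BdodikX28j': 0100666"), so no key0 exists and the attach again fails with "Required key not available". As these runs show, the keyring only accepts key files that group and others cannot access, which is why every successful registration in this log is preceded by a chmod 0600:

chmod 0600 /tmp/tmp.BdodikX28j  # owner-only: accepted by keyring_file_add_key
chmod 0666 /tmp/tmp.BdodikX28j  # group/world-accessible: rejected, as seen above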
00:16:44.384 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:44.384 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:44.384 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:44.384 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:44.385 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83823 00:16:44.385 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83823 ']' 00:16:44.385 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83823 00:16:44.385 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:44.385 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.385 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83823 00:16:44.385 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:44.385 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:44.385 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83823' 00:16:44.385 killing process with pid 83823 00:16:44.385 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83823 00:16:44.385 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83823 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84163 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84163 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84163 ']' 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.644 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.644 [2024-11-20 15:24:31.087004] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:16:44.644 [2024-11-20 15:24:31.087110] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.904 [2024-11-20 15:24:31.242270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.904 [2024-11-20 15:24:31.323625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.904 [2024-11-20 15:24:31.324007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.904 [2024-11-20 15:24:31.324149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.904 [2024-11-20 15:24:31.324261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.904 [2024-11-20 15:24:31.324358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.904 [2024-11-20 15:24:31.324880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.BdodikX28j 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.BdodikX28j 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.BdodikX28j 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BdodikX28j 00:16:45.840 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:45.840 [2024-11-20 15:24:32.296129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.098 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:46.098 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:46.356 [2024-11-20 15:24:32.755471] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:46.356 [2024-11-20 15:24:32.755819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:46.356 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:46.615 malloc0 00:16:46.615 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:46.875 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BdodikX28j 00:16:47.134 [2024-11-20 15:24:33.501564] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BdodikX28j': 0100666 00:16:47.134 [2024-11-20 15:24:33.501625] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:47.134 2024/11/20 15:24:33 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.BdodikX28j], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:47.134 request: 00:16:47.134 { 00:16:47.134 "method": "keyring_file_add_key", 00:16:47.134 "params": { 00:16:47.135 "name": "key0", 00:16:47.135 "path": "/tmp/tmp.BdodikX28j" 00:16:47.135 } 00:16:47.135 } 00:16:47.135 Got JSON-RPC error response 00:16:47.135 GoRPCClient: error on JSON-RPC call 00:16:47.135 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:47.394 [2024-11-20 15:24:33.733252] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:47.394 [2024-11-20 15:24:33.733647] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:47.394 2024/11/20 15:24:33 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:47.394 request: 00:16:47.394 { 00:16:47.394 "method": "nvmf_subsystem_add_host", 00:16:47.394 "params": { 00:16:47.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.394 "host": "nqn.2016-06.io.spdk:host1", 00:16:47.394 "psk": "key0" 00:16:47.394 } 00:16:47.394 } 00:16:47.394 Got JSON-RPC error response 00:16:47.394 GoRPCClient: error on JSON-RPC call 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 84163 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84163 ']' 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 84163 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84163 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84163' 00:16:47.394 killing process with pid 84163 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84163 00:16:47.394 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84163 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.BdodikX28j 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84285 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84285 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84285 ']' 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.816 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.816 [2024-11-20 15:24:34.170548] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:16:47.816 [2024-11-20 15:24:34.171445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.097 [2024-11-20 15:24:34.310244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.097 [2024-11-20 15:24:34.402669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
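A note on the sequence above: the NOT wrapper at target/tls.sh@178 deliberately runs setup_nvmf_tgt against a key file that is still mode 0666, and SPDK's keyring rejects any PSK file that is not owner-only, which is why keyring_file_add_key fails with "Invalid permissions ... 0100666" and nvmf_subsystem_add_host then cannot find key0. The chmod 0600 at target/tls.sh@182 is what lets the retry below succeed. A condensed sketch of the working order of operations (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py; paths as used in this run):

  # The PSK interchange file must be private to its owner before the
  # keyring will accept it; 0666 is rejected as shown above.
  chmod 0600 /tmp/tmp.BdodikX28j
  rpc.py keyring_file_add_key key0 /tmp/tmp.BdodikX28j
  # Only once the key exists can the host entry reference it:
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0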
00:16:48.097 [2024-11-20 15:24:34.402761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.097 [2024-11-20 15:24:34.402772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.097 [2024-11-20 15:24:34.402781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.097 [2024-11-20 15:24:34.402789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.097 [2024-11-20 15:24:34.403194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.665 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.665 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:48.665 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:48.665 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:48.665 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.924 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.924 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.BdodikX28j 00:16:48.924 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BdodikX28j 00:16:48.924 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:49.183 [2024-11-20 15:24:35.430389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.183 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:49.442 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:49.442 [2024-11-20 15:24:35.897754] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:49.442 [2024-11-20 15:24:35.898013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:49.701 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:49.701 malloc0 00:16:49.960 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:49.960 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BdodikX28j 00:16:50.219 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:50.477 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:50.477 15:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84392 00:16:50.477 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:50.477 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84392 /var/tmp/bdevperf.sock 00:16:50.477 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84392 ']' 00:16:50.477 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:50.477 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.477 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:50.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:50.477 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.477 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.477 [2024-11-20 15:24:36.945780] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:16:50.477 [2024-11-20 15:24:36.945897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84392 ] 00:16:50.735 [2024-11-20 15:24:37.103887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.735 [2024-11-20 15:24:37.187334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.671 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.671 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:51.671 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BdodikX28j 00:16:51.671 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:51.931 [2024-11-20 15:24:38.328555] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:51.931 TLSTESTn1 00:16:52.191 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:52.451 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:52.451 "subsystems": [ 00:16:52.451 { 00:16:52.451 "subsystem": "keyring", 00:16:52.451 "config": [ 00:16:52.451 { 00:16:52.451 "method": "keyring_file_add_key", 00:16:52.451 "params": { 00:16:52.451 "name": "key0", 00:16:52.451 "path": "/tmp/tmp.BdodikX28j" 00:16:52.451 } 00:16:52.451 } 00:16:52.451 ] 00:16:52.451 }, 00:16:52.451 { 00:16:52.451 "subsystem": "iobuf", 00:16:52.451 "config": [ 00:16:52.451 { 00:16:52.451 "method": "iobuf_set_options", 00:16:52.451 "params": { 00:16:52.451 "enable_numa": false, 00:16:52.451 "large_bufsize": 135168, 00:16:52.451 
"large_pool_count": 1024, 00:16:52.451 "small_bufsize": 8192, 00:16:52.451 "small_pool_count": 8192 00:16:52.451 } 00:16:52.451 } 00:16:52.451 ] 00:16:52.451 }, 00:16:52.451 { 00:16:52.451 "subsystem": "sock", 00:16:52.451 "config": [ 00:16:52.451 { 00:16:52.451 "method": "sock_set_default_impl", 00:16:52.451 "params": { 00:16:52.451 "impl_name": "posix" 00:16:52.451 } 00:16:52.451 }, 00:16:52.451 { 00:16:52.451 "method": "sock_impl_set_options", 00:16:52.451 "params": { 00:16:52.451 "enable_ktls": false, 00:16:52.451 "enable_placement_id": 0, 00:16:52.451 "enable_quickack": false, 00:16:52.451 "enable_recv_pipe": true, 00:16:52.451 "enable_zerocopy_send_client": false, 00:16:52.451 "enable_zerocopy_send_server": true, 00:16:52.451 "impl_name": "ssl", 00:16:52.451 "recv_buf_size": 4096, 00:16:52.451 "send_buf_size": 4096, 00:16:52.451 "tls_version": 0, 00:16:52.451 "zerocopy_threshold": 0 00:16:52.451 } 00:16:52.451 }, 00:16:52.451 { 00:16:52.451 "method": "sock_impl_set_options", 00:16:52.451 "params": { 00:16:52.451 "enable_ktls": false, 00:16:52.451 "enable_placement_id": 0, 00:16:52.451 "enable_quickack": false, 00:16:52.451 "enable_recv_pipe": true, 00:16:52.451 "enable_zerocopy_send_client": false, 00:16:52.451 "enable_zerocopy_send_server": true, 00:16:52.451 "impl_name": "posix", 00:16:52.451 "recv_buf_size": 2097152, 00:16:52.451 "send_buf_size": 2097152, 00:16:52.451 "tls_version": 0, 00:16:52.451 "zerocopy_threshold": 0 00:16:52.451 } 00:16:52.451 } 00:16:52.451 ] 00:16:52.451 }, 00:16:52.451 { 00:16:52.451 "subsystem": "vmd", 00:16:52.451 "config": [] 00:16:52.451 }, 00:16:52.451 { 00:16:52.451 "subsystem": "accel", 00:16:52.451 "config": [ 00:16:52.451 { 00:16:52.451 "method": "accel_set_options", 00:16:52.451 "params": { 00:16:52.451 "buf_count": 2048, 00:16:52.451 "large_cache_size": 16, 00:16:52.451 "sequence_count": 2048, 00:16:52.451 "small_cache_size": 128, 00:16:52.451 "task_count": 2048 00:16:52.451 } 00:16:52.451 } 00:16:52.451 ] 00:16:52.451 }, 00:16:52.451 { 00:16:52.451 "subsystem": "bdev", 00:16:52.451 "config": [ 00:16:52.451 { 00:16:52.451 "method": "bdev_set_options", 00:16:52.451 "params": { 00:16:52.451 "bdev_auto_examine": true, 00:16:52.451 "bdev_io_cache_size": 256, 00:16:52.451 "bdev_io_pool_size": 65535, 00:16:52.451 "iobuf_large_cache_size": 16, 00:16:52.451 "iobuf_small_cache_size": 128 00:16:52.451 } 00:16:52.451 }, 00:16:52.451 { 00:16:52.451 "method": "bdev_raid_set_options", 00:16:52.451 "params": { 00:16:52.451 "process_max_bandwidth_mb_sec": 0, 00:16:52.451 "process_window_size_kb": 1024 00:16:52.451 } 00:16:52.451 }, 00:16:52.451 { 00:16:52.451 "method": "bdev_iscsi_set_options", 00:16:52.451 "params": { 00:16:52.451 "timeout_sec": 30 00:16:52.451 } 00:16:52.451 }, 00:16:52.451 { 00:16:52.451 "method": "bdev_nvme_set_options", 00:16:52.451 "params": { 00:16:52.451 "action_on_timeout": "none", 00:16:52.451 "allow_accel_sequence": false, 00:16:52.451 "arbitration_burst": 0, 00:16:52.451 "bdev_retry_count": 3, 00:16:52.451 "ctrlr_loss_timeout_sec": 0, 00:16:52.451 "delay_cmd_submit": true, 00:16:52.451 "dhchap_dhgroups": [ 00:16:52.451 "null", 00:16:52.451 "ffdhe2048", 00:16:52.451 "ffdhe3072", 00:16:52.451 "ffdhe4096", 00:16:52.451 "ffdhe6144", 00:16:52.451 "ffdhe8192" 00:16:52.451 ], 00:16:52.451 "dhchap_digests": [ 00:16:52.451 "sha256", 00:16:52.451 "sha384", 00:16:52.451 "sha512" 00:16:52.451 ], 00:16:52.451 "disable_auto_failback": false, 00:16:52.451 "fast_io_fail_timeout_sec": 0, 00:16:52.451 "generate_uuids": false, 00:16:52.451 
"high_priority_weight": 0, 00:16:52.451 "io_path_stat": false, 00:16:52.451 "io_queue_requests": 0, 00:16:52.451 "keep_alive_timeout_ms": 10000, 00:16:52.451 "low_priority_weight": 0, 00:16:52.451 "medium_priority_weight": 0, 00:16:52.451 "nvme_adminq_poll_period_us": 10000, 00:16:52.451 "nvme_error_stat": false, 00:16:52.451 "nvme_ioq_poll_period_us": 0, 00:16:52.451 "rdma_cm_event_timeout_ms": 0, 00:16:52.451 "rdma_max_cq_size": 0, 00:16:52.451 "rdma_srq_size": 0, 00:16:52.451 "reconnect_delay_sec": 0, 00:16:52.451 "timeout_admin_us": 0, 00:16:52.451 "timeout_us": 0, 00:16:52.451 "transport_ack_timeout": 0, 00:16:52.451 "transport_retry_count": 4, 00:16:52.451 "transport_tos": 0 00:16:52.451 } 00:16:52.451 }, 00:16:52.451 { 00:16:52.451 "method": "bdev_nvme_set_hotplug", 00:16:52.451 "params": { 00:16:52.451 "enable": false, 00:16:52.451 "period_us": 100000 00:16:52.451 } 00:16:52.451 }, 00:16:52.451 { 00:16:52.451 "method": "bdev_malloc_create", 00:16:52.451 "params": { 00:16:52.451 "block_size": 4096, 00:16:52.451 "dif_is_head_of_md": false, 00:16:52.451 "dif_pi_format": 0, 00:16:52.451 "dif_type": 0, 00:16:52.451 "md_size": 0, 00:16:52.451 "name": "malloc0", 00:16:52.451 "num_blocks": 8192, 00:16:52.451 "optimal_io_boundary": 0, 00:16:52.451 "physical_block_size": 4096, 00:16:52.451 "uuid": "8507d28e-e807-41f7-8402-653b28c708f5" 00:16:52.451 } 00:16:52.451 }, 00:16:52.451 { 00:16:52.452 "method": "bdev_wait_for_examine" 00:16:52.452 } 00:16:52.452 ] 00:16:52.452 }, 00:16:52.452 { 00:16:52.452 "subsystem": "nbd", 00:16:52.452 "config": [] 00:16:52.452 }, 00:16:52.452 { 00:16:52.452 "subsystem": "scheduler", 00:16:52.452 "config": [ 00:16:52.452 { 00:16:52.452 "method": "framework_set_scheduler", 00:16:52.452 "params": { 00:16:52.452 "name": "static" 00:16:52.452 } 00:16:52.452 } 00:16:52.452 ] 00:16:52.452 }, 00:16:52.452 { 00:16:52.452 "subsystem": "nvmf", 00:16:52.452 "config": [ 00:16:52.452 { 00:16:52.452 "method": "nvmf_set_config", 00:16:52.452 "params": { 00:16:52.452 "admin_cmd_passthru": { 00:16:52.452 "identify_ctrlr": false 00:16:52.452 }, 00:16:52.452 "dhchap_dhgroups": [ 00:16:52.452 "null", 00:16:52.452 "ffdhe2048", 00:16:52.452 "ffdhe3072", 00:16:52.452 "ffdhe4096", 00:16:52.452 "ffdhe6144", 00:16:52.452 "ffdhe8192" 00:16:52.452 ], 00:16:52.452 "dhchap_digests": [ 00:16:52.452 "sha256", 00:16:52.452 "sha384", 00:16:52.452 "sha512" 00:16:52.452 ], 00:16:52.452 "discovery_filter": "match_any" 00:16:52.452 } 00:16:52.452 }, 00:16:52.452 { 00:16:52.452 "method": "nvmf_set_max_subsystems", 00:16:52.452 "params": { 00:16:52.452 "max_subsystems": 1024 00:16:52.452 } 00:16:52.452 }, 00:16:52.452 { 00:16:52.452 "method": "nvmf_set_crdt", 00:16:52.452 "params": { 00:16:52.452 "crdt1": 0, 00:16:52.452 "crdt2": 0, 00:16:52.452 "crdt3": 0 00:16:52.452 } 00:16:52.452 }, 00:16:52.452 { 00:16:52.452 "method": "nvmf_create_transport", 00:16:52.452 "params": { 00:16:52.452 "abort_timeout_sec": 1, 00:16:52.452 "ack_timeout": 0, 00:16:52.452 "buf_cache_size": 4294967295, 00:16:52.452 "c2h_success": false, 00:16:52.452 "data_wr_pool_size": 0, 00:16:52.452 "dif_insert_or_strip": false, 00:16:52.452 "in_capsule_data_size": 4096, 00:16:52.452 "io_unit_size": 131072, 00:16:52.452 "max_aq_depth": 128, 00:16:52.452 "max_io_qpairs_per_ctrlr": 127, 00:16:52.452 "max_io_size": 131072, 00:16:52.452 "max_queue_depth": 128, 00:16:52.452 "num_shared_buffers": 511, 00:16:52.452 "sock_priority": 0, 00:16:52.452 "trtype": "TCP", 00:16:52.452 "zcopy": false 00:16:52.452 } 00:16:52.452 }, 00:16:52.452 { 
00:16:52.452 "method": "nvmf_create_subsystem", 00:16:52.452 "params": { 00:16:52.452 "allow_any_host": false, 00:16:52.452 "ana_reporting": false, 00:16:52.452 "max_cntlid": 65519, 00:16:52.452 "max_namespaces": 10, 00:16:52.452 "min_cntlid": 1, 00:16:52.452 "model_number": "SPDK bdev Controller", 00:16:52.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.452 "serial_number": "SPDK00000000000001" 00:16:52.452 } 00:16:52.452 }, 00:16:52.452 { 00:16:52.452 "method": "nvmf_subsystem_add_host", 00:16:52.452 "params": { 00:16:52.452 "host": "nqn.2016-06.io.spdk:host1", 00:16:52.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.452 "psk": "key0" 00:16:52.452 } 00:16:52.452 }, 00:16:52.452 { 00:16:52.452 "method": "nvmf_subsystem_add_ns", 00:16:52.452 "params": { 00:16:52.452 "namespace": { 00:16:52.452 "bdev_name": "malloc0", 00:16:52.452 "nguid": "8507D28EE80741F78402653B28C708F5", 00:16:52.452 "no_auto_visible": false, 00:16:52.452 "nsid": 1, 00:16:52.452 "uuid": "8507d28e-e807-41f7-8402-653b28c708f5" 00:16:52.452 }, 00:16:52.452 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:52.452 } 00:16:52.452 }, 00:16:52.452 { 00:16:52.452 "method": "nvmf_subsystem_add_listener", 00:16:52.452 "params": { 00:16:52.452 "listen_address": { 00:16:52.452 "adrfam": "IPv4", 00:16:52.452 "traddr": "10.0.0.3", 00:16:52.452 "trsvcid": "4420", 00:16:52.452 "trtype": "TCP" 00:16:52.452 }, 00:16:52.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.452 "secure_channel": true 00:16:52.452 } 00:16:52.452 } 00:16:52.452 ] 00:16:52.452 } 00:16:52.452 ] 00:16:52.452 }' 00:16:52.452 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:52.712 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:52.712 "subsystems": [ 00:16:52.712 { 00:16:52.712 "subsystem": "keyring", 00:16:52.712 "config": [ 00:16:52.712 { 00:16:52.712 "method": "keyring_file_add_key", 00:16:52.712 "params": { 00:16:52.712 "name": "key0", 00:16:52.712 "path": "/tmp/tmp.BdodikX28j" 00:16:52.712 } 00:16:52.712 } 00:16:52.712 ] 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "subsystem": "iobuf", 00:16:52.712 "config": [ 00:16:52.712 { 00:16:52.712 "method": "iobuf_set_options", 00:16:52.712 "params": { 00:16:52.712 "enable_numa": false, 00:16:52.712 "large_bufsize": 135168, 00:16:52.712 "large_pool_count": 1024, 00:16:52.712 "small_bufsize": 8192, 00:16:52.712 "small_pool_count": 8192 00:16:52.712 } 00:16:52.712 } 00:16:52.712 ] 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "subsystem": "sock", 00:16:52.712 "config": [ 00:16:52.712 { 00:16:52.712 "method": "sock_set_default_impl", 00:16:52.712 "params": { 00:16:52.712 "impl_name": "posix" 00:16:52.712 } 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "method": "sock_impl_set_options", 00:16:52.712 "params": { 00:16:52.712 "enable_ktls": false, 00:16:52.712 "enable_placement_id": 0, 00:16:52.712 "enable_quickack": false, 00:16:52.712 "enable_recv_pipe": true, 00:16:52.712 "enable_zerocopy_send_client": false, 00:16:52.712 "enable_zerocopy_send_server": true, 00:16:52.712 "impl_name": "ssl", 00:16:52.712 "recv_buf_size": 4096, 00:16:52.712 "send_buf_size": 4096, 00:16:52.712 "tls_version": 0, 00:16:52.712 "zerocopy_threshold": 0 00:16:52.712 } 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "method": "sock_impl_set_options", 00:16:52.712 "params": { 00:16:52.712 "enable_ktls": false, 00:16:52.712 "enable_placement_id": 0, 00:16:52.712 "enable_quickack": false, 00:16:52.712 
"enable_recv_pipe": true, 00:16:52.712 "enable_zerocopy_send_client": false, 00:16:52.712 "enable_zerocopy_send_server": true, 00:16:52.712 "impl_name": "posix", 00:16:52.712 "recv_buf_size": 2097152, 00:16:52.712 "send_buf_size": 2097152, 00:16:52.712 "tls_version": 0, 00:16:52.712 "zerocopy_threshold": 0 00:16:52.712 } 00:16:52.712 } 00:16:52.712 ] 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "subsystem": "vmd", 00:16:52.712 "config": [] 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "subsystem": "accel", 00:16:52.712 "config": [ 00:16:52.712 { 00:16:52.712 "method": "accel_set_options", 00:16:52.712 "params": { 00:16:52.712 "buf_count": 2048, 00:16:52.712 "large_cache_size": 16, 00:16:52.712 "sequence_count": 2048, 00:16:52.712 "small_cache_size": 128, 00:16:52.712 "task_count": 2048 00:16:52.712 } 00:16:52.712 } 00:16:52.712 ] 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "subsystem": "bdev", 00:16:52.712 "config": [ 00:16:52.712 { 00:16:52.712 "method": "bdev_set_options", 00:16:52.712 "params": { 00:16:52.712 "bdev_auto_examine": true, 00:16:52.712 "bdev_io_cache_size": 256, 00:16:52.712 "bdev_io_pool_size": 65535, 00:16:52.712 "iobuf_large_cache_size": 16, 00:16:52.712 "iobuf_small_cache_size": 128 00:16:52.712 } 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "method": "bdev_raid_set_options", 00:16:52.712 "params": { 00:16:52.712 "process_max_bandwidth_mb_sec": 0, 00:16:52.712 "process_window_size_kb": 1024 00:16:52.712 } 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "method": "bdev_iscsi_set_options", 00:16:52.712 "params": { 00:16:52.712 "timeout_sec": 30 00:16:52.712 } 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "method": "bdev_nvme_set_options", 00:16:52.712 "params": { 00:16:52.712 "action_on_timeout": "none", 00:16:52.712 "allow_accel_sequence": false, 00:16:52.712 "arbitration_burst": 0, 00:16:52.712 "bdev_retry_count": 3, 00:16:52.712 "ctrlr_loss_timeout_sec": 0, 00:16:52.712 "delay_cmd_submit": true, 00:16:52.712 "dhchap_dhgroups": [ 00:16:52.712 "null", 00:16:52.712 "ffdhe2048", 00:16:52.712 "ffdhe3072", 00:16:52.712 "ffdhe4096", 00:16:52.712 "ffdhe6144", 00:16:52.712 "ffdhe8192" 00:16:52.712 ], 00:16:52.712 "dhchap_digests": [ 00:16:52.712 "sha256", 00:16:52.712 "sha384", 00:16:52.712 "sha512" 00:16:52.712 ], 00:16:52.712 "disable_auto_failback": false, 00:16:52.712 "fast_io_fail_timeout_sec": 0, 00:16:52.712 "generate_uuids": false, 00:16:52.712 "high_priority_weight": 0, 00:16:52.712 "io_path_stat": false, 00:16:52.712 "io_queue_requests": 512, 00:16:52.712 "keep_alive_timeout_ms": 10000, 00:16:52.712 "low_priority_weight": 0, 00:16:52.712 "medium_priority_weight": 0, 00:16:52.712 "nvme_adminq_poll_period_us": 10000, 00:16:52.712 "nvme_error_stat": false, 00:16:52.712 "nvme_ioq_poll_period_us": 0, 00:16:52.712 "rdma_cm_event_timeout_ms": 0, 00:16:52.712 "rdma_max_cq_size": 0, 00:16:52.713 "rdma_srq_size": 0, 00:16:52.713 "reconnect_delay_sec": 0, 00:16:52.713 "timeout_admin_us": 0, 00:16:52.713 "timeout_us": 0, 00:16:52.713 "transport_ack_timeout": 0, 00:16:52.713 "transport_retry_count": 4, 00:16:52.713 "transport_tos": 0 00:16:52.713 } 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "method": "bdev_nvme_attach_controller", 00:16:52.713 "params": { 00:16:52.713 "adrfam": "IPv4", 00:16:52.713 "ctrlr_loss_timeout_sec": 0, 00:16:52.713 "ddgst": false, 00:16:52.713 "fast_io_fail_timeout_sec": 0, 00:16:52.713 "hdgst": false, 00:16:52.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:52.713 "multipath": "multipath", 00:16:52.713 "name": "TLSTEST", 00:16:52.713 "prchk_guard": false, 
00:16:52.713 "prchk_reftag": false, 00:16:52.713 "psk": "key0", 00:16:52.713 "reconnect_delay_sec": 0, 00:16:52.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.713 "traddr": "10.0.0.3", 00:16:52.713 "trsvcid": "4420", 00:16:52.713 "trtype": "TCP" 00:16:52.713 } 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "method": "bdev_nvme_set_hotplug", 00:16:52.713 "params": { 00:16:52.713 "enable": false, 00:16:52.713 "period_us": 100000 00:16:52.713 } 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "method": "bdev_wait_for_examine" 00:16:52.713 } 00:16:52.713 ] 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "subsystem": "nbd", 00:16:52.713 "config": [] 00:16:52.713 } 00:16:52.713 ] 00:16:52.713 }' 00:16:52.713 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84392 00:16:52.713 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84392 ']' 00:16:52.713 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84392 00:16:52.713 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:52.713 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.713 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84392 00:16:52.713 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:52.713 killing process with pid 84392 00:16:52.713 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:52.713 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84392' 00:16:52.713 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84392 00:16:52.713 Received shutdown signal, test time was about 10.000000 seconds 00:16:52.713 00:16:52.713 Latency(us) 00:16:52.713 [2024-11-20T15:24:39.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.713 [2024-11-20T15:24:39.195Z] =================================================================================================================== 00:16:52.713 [2024-11-20T15:24:39.195Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:52.713 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84392 00:16:52.971 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 84285 00:16:52.971 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84285 ']' 00:16:52.971 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84285 00:16:52.971 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:52.971 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.971 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84285 00:16:52.971 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:52.971 killing process with pid 84285 00:16:52.971 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:52.971 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 84285' 00:16:52.971 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84285 00:16:52.971 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84285 00:16:53.539 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:53.540 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:53.540 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:53.540 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:53.540 "subsystems": [ 00:16:53.540 { 00:16:53.540 "subsystem": "keyring", 00:16:53.540 "config": [ 00:16:53.540 { 00:16:53.540 "method": "keyring_file_add_key", 00:16:53.540 "params": { 00:16:53.540 "name": "key0", 00:16:53.540 "path": "/tmp/tmp.BdodikX28j" 00:16:53.540 } 00:16:53.540 } 00:16:53.540 ] 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "subsystem": "iobuf", 00:16:53.540 "config": [ 00:16:53.540 { 00:16:53.540 "method": "iobuf_set_options", 00:16:53.540 "params": { 00:16:53.540 "enable_numa": false, 00:16:53.540 "large_bufsize": 135168, 00:16:53.540 "large_pool_count": 1024, 00:16:53.540 "small_bufsize": 8192, 00:16:53.540 "small_pool_count": 8192 00:16:53.540 } 00:16:53.540 } 00:16:53.540 ] 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "subsystem": "sock", 00:16:53.540 "config": [ 00:16:53.540 { 00:16:53.540 "method": "sock_set_default_impl", 00:16:53.540 "params": { 00:16:53.540 "impl_name": "posix" 00:16:53.540 } 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "method": "sock_impl_set_options", 00:16:53.540 "params": { 00:16:53.540 "enable_ktls": false, 00:16:53.540 "enable_placement_id": 0, 00:16:53.540 "enable_quickack": false, 00:16:53.540 "enable_recv_pipe": true, 00:16:53.540 "enable_zerocopy_send_client": false, 00:16:53.540 "enable_zerocopy_send_server": true, 00:16:53.540 "impl_name": "ssl", 00:16:53.540 "recv_buf_size": 4096, 00:16:53.540 "send_buf_size": 4096, 00:16:53.540 "tls_version": 0, 00:16:53.540 "zerocopy_threshold": 0 00:16:53.540 } 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "method": "sock_impl_set_options", 00:16:53.540 "params": { 00:16:53.540 "enable_ktls": false, 00:16:53.540 "enable_placement_id": 0, 00:16:53.540 "enable_quickack": false, 00:16:53.540 "enable_recv_pipe": true, 00:16:53.540 "enable_zerocopy_send_client": false, 00:16:53.540 "enable_zerocopy_send_server": true, 00:16:53.540 "impl_name": "posix", 00:16:53.540 "recv_buf_size": 2097152, 00:16:53.540 "send_buf_size": 2097152, 00:16:53.540 "tls_version": 0, 00:16:53.540 "zerocopy_threshold": 0 00:16:53.540 } 00:16:53.540 } 00:16:53.540 ] 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "subsystem": "vmd", 00:16:53.540 "config": [] 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "subsystem": "accel", 00:16:53.540 "config": [ 00:16:53.540 { 00:16:53.540 "method": "accel_set_options", 00:16:53.540 "params": { 00:16:53.540 "buf_count": 2048, 00:16:53.540 "large_cache_size": 16, 00:16:53.540 "sequence_count": 2048, 00:16:53.540 "small_cache_size": 128, 00:16:53.540 "task_count": 2048 00:16:53.540 } 00:16:53.540 } 00:16:53.540 ] 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "subsystem": "bdev", 00:16:53.540 "config": [ 00:16:53.540 { 00:16:53.540 "method": "bdev_set_options", 00:16:53.540 "params": { 00:16:53.540 "bdev_auto_examine": true, 00:16:53.540 "bdev_io_cache_size": 256, 00:16:53.540 "bdev_io_pool_size": 65535, 00:16:53.540 "iobuf_large_cache_size": 
16, 00:16:53.540 "iobuf_small_cache_size": 128 00:16:53.540 } 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "method": "bdev_raid_set_options", 00:16:53.540 "params": { 00:16:53.540 "process_max_bandwidth_mb_sec": 0, 00:16:53.540 "process_window_size_kb": 1024 00:16:53.540 } 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "method": "bdev_iscsi_set_options", 00:16:53.540 "params": { 00:16:53.540 "timeout_sec": 30 00:16:53.540 } 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "method": "bdev_nvme_set_options", 00:16:53.540 "params": { 00:16:53.540 "action_on_timeout": "none", 00:16:53.540 "allow_accel_sequence": false, 00:16:53.540 "arbitration_burst": 0, 00:16:53.540 "bdev_retry_count": 3, 00:16:53.540 "ctrlr_loss_timeout_sec": 0, 00:16:53.540 "delay_cmd_submit": true, 00:16:53.540 "dhchap_dhgroups": [ 00:16:53.540 "null", 00:16:53.540 "ffdhe2048", 00:16:53.540 "ffdhe3072", 00:16:53.540 "ffdhe4096", 00:16:53.540 "ffdhe6144", 00:16:53.540 "ffdhe8192" 00:16:53.540 ], 00:16:53.540 "dhchap_digests": [ 00:16:53.540 "sha256", 00:16:53.540 "sha384", 00:16:53.540 "sha512" 00:16:53.540 ], 00:16:53.540 "disable_auto_failback": false, 00:16:53.540 "fast_io_fail_timeout_sec": 0, 00:16:53.540 "generate_uuids": false, 00:16:53.540 "high_priority_weight": 0, 00:16:53.540 "io_path_stat": false, 00:16:53.540 "io_queue_requests": 0, 00:16:53.540 "keep_alive_timeout_ms": 10000, 00:16:53.540 "low_priority_weight": 0, 00:16:53.540 "medium_priority_weight": 0, 00:16:53.540 "nvme_adminq_poll_period_us": 10000, 00:16:53.540 "nvme_error_stat": false, 00:16:53.540 "nvme_ioq_poll_period_us": 0, 00:16:53.540 "rdma_cm_event_timeout_ms": 0, 00:16:53.540 "rdma_max_cq_size": 0, 00:16:53.540 "rdma_srq_size": 0, 00:16:53.540 "reconnect_delay_sec": 0, 00:16:53.540 "timeout_admin_us": 0, 00:16:53.540 "timeout_us": 0, 00:16:53.540 "transport_ack_timeout": 0, 00:16:53.540 "transport_retry_count": 4, 00:16:53.540 "transport_tos": 0 00:16:53.540 } 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "method": "bdev_nvme_set_hotplug", 00:16:53.540 "params": { 00:16:53.540 "enable": false, 00:16:53.540 "period_us": 100000 00:16:53.540 } 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "method": "bdev_malloc_create", 00:16:53.540 "params": { 00:16:53.540 "block_size": 4096, 00:16:53.540 "dif_is_head_of_md": false, 00:16:53.540 "dif_pi_format": 0, 00:16:53.540 "dif_type": 0, 00:16:53.540 "md_size": 0, 00:16:53.540 "name": "malloc0", 00:16:53.540 "num_blocks": 8192, 00:16:53.540 "optimal_io_boundary": 0, 00:16:53.540 "physical_block_size": 4096, 00:16:53.540 "uuid": "8507d28e-e807-41f7-8402-653b28c708f5" 00:16:53.540 } 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "method": "bdev_wait_for_examine" 00:16:53.540 } 00:16:53.540 ] 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "subsystem": "nbd", 00:16:53.540 "config": [] 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "subsystem": "scheduler", 00:16:53.540 "config": [ 00:16:53.540 { 00:16:53.540 "method": "framework_set_scheduler", 00:16:53.540 "params": { 00:16:53.540 "name": "static" 00:16:53.540 } 00:16:53.540 } 00:16:53.540 ] 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "subsystem": "nvmf", 00:16:53.540 "config": [ 00:16:53.540 { 00:16:53.540 "method": "nvmf_set_config", 00:16:53.540 "params": { 00:16:53.540 "admin_cmd_passthru": { 00:16:53.540 "identify_ctrlr": false 00:16:53.540 }, 00:16:53.540 "dhchap_dhgroups": [ 00:16:53.540 "null", 00:16:53.540 "ffdhe2048", 00:16:53.540 "ffdhe3072", 00:16:53.540 "ffdhe4096", 00:16:53.540 "ffdhe6144", 00:16:53.540 "ffdhe8192" 00:16:53.540 ], 00:16:53.540 "dhchap_digests": [ 
00:16:53.540 "sha256", 00:16:53.540 "sha384", 00:16:53.540 "sha512" 00:16:53.540 ], 00:16:53.540 "discovery_filter": "match_any" 00:16:53.540 } 00:16:53.540 }, 00:16:53.540 { 00:16:53.540 "method": "nvmf_set_max_subsystems", 00:16:53.540 "params": { 00:16:53.540 "max_subsystems": 1024 00:16:53.540 } 00:16:53.541 }, 00:16:53.541 { 00:16:53.541 "method": "nvmf_set_crdt", 00:16:53.541 "params": { 00:16:53.541 "crdt1": 0, 00:16:53.541 "crdt2": 0, 00:16:53.541 "crdt3": 0 00:16:53.541 } 00:16:53.541 }, 00:16:53.541 { 00:16:53.541 "method": "nvmf_create_transport", 00:16:53.541 "params": { 00:16:53.541 "abort_timeout_sec": 1, 00:16:53.541 "ack_timeout": 0, 00:16:53.541 "buf_cache_size": 4294967295, 00:16:53.541 "c2h_success": false, 00:16:53.541 "data_wr_pool_size": 0, 00:16:53.541 "dif_insert_or_strip": false, 00:16:53.541 "in_capsule_data_size": 4096, 00:16:53.541 "io_unit_size": 131072, 00:16:53.541 "max_aq_depth": 128, 00:16:53.541 "max_io_qpairs_per_ctrlr": 127, 00:16:53.541 "max_io_size": 131072, 00:16:53.541 "max_queue_depth": 128, 00:16:53.541 "num_shared_buffers": 511, 00:16:53.541 "sock_priority": 0, 00:16:53.541 "trtype": "TCP", 00:16:53.541 "zcopy": false 00:16:53.541 } 00:16:53.541 }, 00:16:53.541 { 00:16:53.541 "method": "nvmf_create_subsystem", 00:16:53.541 "params": { 00:16:53.541 "allow_any_host": false, 00:16:53.541 "ana_reporting": false, 00:16:53.541 "max_cntlid": 65519, 00:16:53.541 "max_namespaces": 10, 00:16:53.541 "min_cntlid": 1, 00:16:53.541 "model_number": "SPDK bdev Controller", 00:16:53.541 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.541 "serial_number": "SPDK00000000000001" 00:16:53.541 } 00:16:53.541 }, 00:16:53.541 { 00:16:53.541 "method": "nvmf_subsystem_add_host", 00:16:53.541 "params": { 00:16:53.541 "host": "nqn.2016-06.io.spdk:host1", 00:16:53.541 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.541 "psk": "key0" 00:16:53.541 } 00:16:53.541 }, 00:16:53.541 { 00:16:53.541 "method": "nvmf_subsystem_add_ns", 00:16:53.541 "params": { 00:16:53.541 "namespace": { 00:16:53.541 "bdev_name": "malloc0", 00:16:53.541 "nguid": "8507D28EE80741F78402653B28C708F5", 00:16:53.541 "no_auto_visible": false, 00:16:53.541 "nsid": 1, 00:16:53.541 "uuid": "8507d28e-e807-41f7-8402-653b28c708f5" 00:16:53.541 }, 00:16:53.541 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:53.541 } 00:16:53.541 }, 00:16:53.541 { 00:16:53.541 "method": "nvmf_subsystem_add_listener", 00:16:53.541 "params": { 00:16:53.541 "listen_address": { 00:16:53.541 "adrfam": "IPv4", 00:16:53.541 "traddr": "10.0.0.3", 00:16:53.541 "trsvcid": "4420", 00:16:53.541 "trtype": "TCP" 00:16:53.541 }, 00:16:53.541 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.541 "secure_channel": true 00:16:53.541 } 00:16:53.541 } 00:16:53.541 ] 00:16:53.541 } 00:16:53.541 ] 00:16:53.541 }' 00:16:53.541 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:53.541 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84478 00:16:53.541 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84478 00:16:53.541 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:53.541 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84478 ']' 00:16:53.541 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.541 15:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.541 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.541 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.541 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:53.541 [2024-11-20 15:24:39.803690] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:16:53.541 [2024-11-20 15:24:39.803795] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.541 [2024-11-20 15:24:39.942076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.800 [2024-11-20 15:24:40.022638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.800 [2024-11-20 15:24:40.022727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.800 [2024-11-20 15:24:40.022739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.800 [2024-11-20 15:24:40.022748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.800 [2024-11-20 15:24:40.022756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
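Worth spelling out what -c /dev/fd/62 is doing here: rather than replaying RPCs one by one, the harness captured the target's whole state earlier with save_config (the tgtconf dump above) and now boots a fresh nvmf_tgt directly from that JSON, fed in through what is almost certainly a bash process substitution of the echoed config; the target also runs inside the nvmf_tgt_ns_spdk network namespace, hence the ip netns exec prefix. A sketch of the pattern:

  # Capture on the running target, then replay the whole state at startup.
  tgtconf=$(rpc.py save_config)
  # <(...) appears as /dev/fd/62 (or similar) inside the new process:
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")

The keyring entry, TCP transport, subsystem, TLS listener (secure_channel: true) and host/PSK binding all come back in one shot, which is exactly what the config dump shows.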
00:16:53.800 [2024-11-20 15:24:40.023285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.059 [2024-11-20 15:24:40.304604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.059 [2024-11-20 15:24:40.336510] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:54.059 [2024-11-20 15:24:40.336826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:54.318 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.318 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:54.319 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:54.319 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:54.319 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.579 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.579 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84522 00:16:54.579 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84522 /var/tmp/bdevperf.sock 00:16:54.579 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84522 ']' 00:16:54.579 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.579 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.579 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
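Two RPC sockets are in play from here on: the target stays on the default /var/tmp/spdk.sock, while bdevperf is launched with -r /var/tmp/bdevperf.sock, and its -z flag keeps it idle until it is explicitly told to run (the perform_tests call later in the log). A sketch of the split, with -s selecting which process an RPC is sent to:

  # Initiator on its own socket, parked until perform_tests arrives:
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py nvmf_create_transport -t tcp -o                                           # target side
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BdodikX28j   # initiator side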
00:16:54.579 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.579 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.579 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:54.579 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:54.579 "subsystems": [ 00:16:54.579 { 00:16:54.579 "subsystem": "keyring", 00:16:54.579 "config": [ 00:16:54.579 { 00:16:54.579 "method": "keyring_file_add_key", 00:16:54.579 "params": { 00:16:54.579 "name": "key0", 00:16:54.579 "path": "/tmp/tmp.BdodikX28j" 00:16:54.579 } 00:16:54.579 } 00:16:54.579 ] 00:16:54.579 }, 00:16:54.579 { 00:16:54.579 "subsystem": "iobuf", 00:16:54.579 "config": [ 00:16:54.579 { 00:16:54.579 "method": "iobuf_set_options", 00:16:54.579 "params": { 00:16:54.579 "enable_numa": false, 00:16:54.579 "large_bufsize": 135168, 00:16:54.579 "large_pool_count": 1024, 00:16:54.579 "small_bufsize": 8192, 00:16:54.579 "small_pool_count": 8192 00:16:54.579 } 00:16:54.579 } 00:16:54.579 ] 00:16:54.579 }, 00:16:54.579 { 00:16:54.579 "subsystem": "sock", 00:16:54.579 "config": [ 00:16:54.579 { 00:16:54.579 "method": "sock_set_default_impl", 00:16:54.579 "params": { 00:16:54.579 "impl_name": "posix" 00:16:54.579 } 00:16:54.579 }, 00:16:54.579 { 00:16:54.579 "method": "sock_impl_set_options", 00:16:54.579 "params": { 00:16:54.579 "enable_ktls": false, 00:16:54.579 "enable_placement_id": 0, 00:16:54.579 "enable_quickack": false, 00:16:54.580 "enable_recv_pipe": true, 00:16:54.580 "enable_zerocopy_send_client": false, 00:16:54.580 "enable_zerocopy_send_server": true, 00:16:54.580 "impl_name": "ssl", 00:16:54.580 "recv_buf_size": 4096, 00:16:54.580 "send_buf_size": 4096, 00:16:54.580 "tls_version": 0, 00:16:54.580 "zerocopy_threshold": 0 00:16:54.580 } 00:16:54.580 }, 00:16:54.580 { 00:16:54.580 "method": "sock_impl_set_options", 00:16:54.580 "params": { 00:16:54.580 "enable_ktls": false, 00:16:54.580 "enable_placement_id": 0, 00:16:54.580 "enable_quickack": false, 00:16:54.580 "enable_recv_pipe": true, 00:16:54.580 "enable_zerocopy_send_client": false, 00:16:54.580 "enable_zerocopy_send_server": true, 00:16:54.580 "impl_name": "posix", 00:16:54.580 "recv_buf_size": 2097152, 00:16:54.580 "send_buf_size": 2097152, 00:16:54.580 "tls_version": 0, 00:16:54.580 "zerocopy_threshold": 0 00:16:54.580 } 00:16:54.580 } 00:16:54.580 ] 00:16:54.580 }, 00:16:54.580 { 00:16:54.580 "subsystem": "vmd", 00:16:54.580 "config": [] 00:16:54.580 }, 00:16:54.580 { 00:16:54.580 "subsystem": "accel", 00:16:54.580 "config": [ 00:16:54.580 { 00:16:54.580 "method": "accel_set_options", 00:16:54.580 "params": { 00:16:54.580 "buf_count": 2048, 00:16:54.580 "large_cache_size": 16, 00:16:54.580 "sequence_count": 2048, 00:16:54.580 "small_cache_size": 128, 00:16:54.580 "task_count": 2048 00:16:54.580 } 00:16:54.580 } 00:16:54.580 ] 00:16:54.580 }, 00:16:54.580 { 00:16:54.580 "subsystem": "bdev", 00:16:54.580 "config": [ 00:16:54.580 { 00:16:54.580 "method": "bdev_set_options", 00:16:54.580 "params": { 00:16:54.580 "bdev_auto_examine": true, 00:16:54.580 "bdev_io_cache_size": 256, 00:16:54.580 "bdev_io_pool_size": 65535, 00:16:54.580 "iobuf_large_cache_size": 16, 00:16:54.580 "iobuf_small_cache_size": 128 00:16:54.580 } 00:16:54.580 }, 00:16:54.580 { 00:16:54.580 "method": "bdev_raid_set_options", 
00:16:54.580 "params": { 00:16:54.580 "process_max_bandwidth_mb_sec": 0, 00:16:54.580 "process_window_size_kb": 1024 00:16:54.580 } 00:16:54.580 }, 00:16:54.580 { 00:16:54.580 "method": "bdev_iscsi_set_options", 00:16:54.580 "params": { 00:16:54.580 "timeout_sec": 30 00:16:54.580 } 00:16:54.580 }, 00:16:54.580 { 00:16:54.580 "method": "bdev_nvme_set_options", 00:16:54.580 "params": { 00:16:54.580 "action_on_timeout": "none", 00:16:54.580 "allow_accel_sequence": false, 00:16:54.580 "arbitration_burst": 0, 00:16:54.580 "bdev_retry_count": 3, 00:16:54.580 "ctrlr_loss_timeout_sec": 0, 00:16:54.580 "delay_cmd_submit": true, 00:16:54.580 "dhchap_dhgroups": [ 00:16:54.580 "null", 00:16:54.580 "ffdhe2048", 00:16:54.580 "ffdhe3072", 00:16:54.580 "ffdhe4096", 00:16:54.580 "ffdhe6144", 00:16:54.580 "ffdhe8192" 00:16:54.580 ], 00:16:54.580 "dhchap_digests": [ 00:16:54.580 "sha256", 00:16:54.580 "sha384", 00:16:54.580 "sha512" 00:16:54.580 ], 00:16:54.580 "disable_auto_failback": false, 00:16:54.580 "fast_io_fail_timeout_sec": 0, 00:16:54.580 "generate_uuids": false, 00:16:54.580 "high_priority_weight": 0, 00:16:54.580 "io_path_stat": false, 00:16:54.580 "io_queue_requests": 512, 00:16:54.580 "keep_alive_timeout_ms": 10000, 00:16:54.580 "low_priority_weight": 0, 00:16:54.580 "medium_priority_weight": 0, 00:16:54.580 "nvme_adminq_poll_period_us": 10000, 00:16:54.580 "nvme_error_stat": false, 00:16:54.580 "nvme_ioq_poll_period_us": 0, 00:16:54.580 "rdma_cm_event_timeout_ms": 0, 00:16:54.580 "rdma_max_cq_size": 0, 00:16:54.580 "rdma_srq_size": 0, 00:16:54.580 "reconnect_delay_sec": 0, 00:16:54.580 "timeout_admin_us": 0, 00:16:54.580 "timeout_us": 0, 00:16:54.580 "transport_ack_timeout": 0, 00:16:54.580 "transport_retry_count": 4, 00:16:54.580 "transport_tos": 0 00:16:54.580 } 00:16:54.580 }, 00:16:54.580 { 00:16:54.580 "method": "bdev_nvme_attach_controller", 00:16:54.580 "params": { 00:16:54.580 "adrfam": "IPv4", 00:16:54.580 "ctrlr_loss_timeout_sec": 0, 00:16:54.580 "ddgst": false, 00:16:54.580 "fast_io_fail_timeout_sec": 0, 00:16:54.580 "hdgst": false, 00:16:54.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:54.580 "multipath": "multipath", 00:16:54.580 "name": "TLSTEST", 00:16:54.580 "prchk_guard": false, 00:16:54.580 "prchk_reftag": false, 00:16:54.580 "psk": "key0", 00:16:54.580 "reconnect_delay_sec": 0, 00:16:54.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.580 "traddr": "10.0.0.3", 00:16:54.580 "trsvcid": "4420", 00:16:54.580 "trtype": "TCP" 00:16:54.580 } 00:16:54.580 }, 00:16:54.580 { 00:16:54.580 "method": "bdev_nvme_set_hotplug", 00:16:54.580 "params": { 00:16:54.580 "enable": false, 00:16:54.580 "period_us": 100000 00:16:54.580 } 00:16:54.580 }, 00:16:54.580 { 00:16:54.580 "method": "bdev_wait_for_examine" 00:16:54.580 } 00:16:54.580 ] 00:16:54.580 }, 00:16:54.580 { 00:16:54.580 "subsystem": "nbd", 00:16:54.580 "config": [] 00:16:54.580 } 00:16:54.580 ] 00:16:54.580 }' 00:16:54.580 [2024-11-20 15:24:40.866122] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
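The initiator gets the same config-replay treatment: the bdevperf configuration saved from /var/tmp/bdevperf.sock is handed back via -c /dev/fd/63, so the TLSTEST controller, its PSK reference and the keyring entry are created at startup rather than by individual RPCs, and bdevperf.py then just triggers the run that follows. A sketch, including the throughput bookkeeping behind the numbers reported below:

  # Boot bdevperf from the captured JSON ($bdevperfconf from tls.sh@199),
  # then kick off the run over its private RPC socket.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
  # MiB/s in the report follows directly from IOPS at the 4096-byte IO size:
  #   5254.50 IOPS * 4096 B / 2^20 B per MiB ~= 20.53 MiB/s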
00:16:54.580 [2024-11-20 15:24:40.866250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84522 ] 00:16:54.580 [2024-11-20 15:24:41.018637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.840 [2024-11-20 15:24:41.102986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.098 [2024-11-20 15:24:41.320425] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:55.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:55.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:55.667 Running I/O for 10 seconds... 00:16:57.538 5212.00 IOPS, 20.36 MiB/s [2024-11-20T15:24:44.958Z] 5229.00 IOPS, 20.43 MiB/s [2024-11-20T15:24:46.338Z] 5244.33 IOPS, 20.49 MiB/s [2024-11-20T15:24:47.288Z] 5251.00 IOPS, 20.51 MiB/s [2024-11-20T15:24:48.224Z] 5252.80 IOPS, 20.52 MiB/s [2024-11-20T15:24:49.158Z] 5248.67 IOPS, 20.50 MiB/s [2024-11-20T15:24:50.098Z] 5256.00 IOPS, 20.53 MiB/s [2024-11-20T15:24:51.036Z] 5258.50 IOPS, 20.54 MiB/s [2024-11-20T15:24:51.974Z] 5245.56 IOPS, 20.49 MiB/s [2024-11-20T15:24:51.974Z] 5248.20 IOPS, 20.50 MiB/s 00:17:05.492 Latency(us) 00:17:05.492 [2024-11-20T15:24:51.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.492 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:05.492 Verification LBA range: start 0x0 length 0x2000 00:17:05.492 TLSTESTn1 : 10.01 5254.50 20.53 0.00 0.00 24324.52 3684.76 19581.84 00:17:05.492 [2024-11-20T15:24:51.974Z] =================================================================================================================== 00:17:05.492 [2024-11-20T15:24:51.974Z] Total : 5254.50 20.53 0.00 0.00 24324.52 3684.76 19581.84 00:17:05.492 { 00:17:05.492 "results": [ 00:17:05.492 { 00:17:05.492 "job": "TLSTESTn1", 00:17:05.492 "core_mask": "0x4", 00:17:05.492 "workload": "verify", 00:17:05.492 "status": "finished", 00:17:05.492 "verify_range": { 00:17:05.492 "start": 0, 00:17:05.492 "length": 8192 00:17:05.492 }, 00:17:05.492 "queue_depth": 128, 00:17:05.492 "io_size": 4096, 00:17:05.492 "runtime": 10.011602, 00:17:05.492 "iops": 5254.503724778512, 00:17:05.492 "mibps": 20.525405174916063, 00:17:05.492 "io_failed": 0, 00:17:05.492 "io_timeout": 0, 00:17:05.492 "avg_latency_us": 24324.51863688644, 00:17:05.493 "min_latency_us": 3684.755020080321, 00:17:05.493 "max_latency_us": 19581.840963855422 00:17:05.493 } 00:17:05.493 ], 00:17:05.493 "core_count": 1 00:17:05.493 } 00:17:05.493 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:05.493 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84522 00:17:05.493 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84522 ']' 00:17:05.493 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84522 00:17:05.493 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:17:05.493 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.493 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84522 00:17:05.752 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:05.752 killing process with pid 84522 00:17:05.752 Received shutdown signal, test time was about 10.000000 seconds 00:17:05.752 00:17:05.752 Latency(us) 00:17:05.752 [2024-11-20T15:24:52.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.752 [2024-11-20T15:24:52.234Z] =================================================================================================================== 00:17:05.752 [2024-11-20T15:24:52.234Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.752 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:05.753 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84522' 00:17:05.753 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84522 00:17:05.753 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84522 00:17:06.012 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84478 00:17:06.012 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84478 ']' 00:17:06.012 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84478 00:17:06.012 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:06.012 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.012 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84478 00:17:06.012 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:06.012 killing process with pid 84478 00:17:06.012 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:06.012 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84478' 00:17:06.012 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84478 00:17:06.013 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84478 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84680 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84680 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 84680 ']' 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.273 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.273 [2024-11-20 15:24:52.669211] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:17:06.273 [2024-11-20 15:24:52.669323] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.533 [2024-11-20 15:24:52.806088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.533 [2024-11-20 15:24:52.888172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.533 [2024-11-20 15:24:52.888251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.533 [2024-11-20 15:24:52.888262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.533 [2024-11-20 15:24:52.888271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.533 [2024-11-20 15:24:52.888278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:06.533 [2024-11-20 15:24:52.888699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.484 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.484 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:07.484 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:07.484 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:07.484 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.484 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.484 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.BdodikX28j 00:17:07.484 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BdodikX28j 00:17:07.484 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:07.484 [2024-11-20 15:24:53.896254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.484 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:07.743 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:08.002 [2024-11-20 15:24:54.379601] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:08.002 [2024-11-20 15:24:54.379896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:08.002 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:08.260 malloc0 00:17:08.260 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:08.519 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BdodikX28j 00:17:08.778 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:09.037 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84785 00:17:09.037 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:09.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
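Stripped of xtrace noise, the setup_nvmf_tgt sequence above is seven RPCs; a condensed sketch, with rpc.py paths shortened to the repo root and /tmp/tmp.BdodikX28j being the PSK file this particular run generated:

scripts/rpc.py nvmf_create_transport -t tcp -o                  # TCP transport, default opts
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -k                               # -k marks the listener TLS-capable (experimental)
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0            # 32 MiB RAM-backed bdev for the namespace
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BdodikX28j    # register the PSK file as key0
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0                        # bind host1 to that PSK

The bdevperf instance started next attaches to this listener with the same key0, which is why its bdev_nvme_attach_controller call below also carries --psk key0.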
00:17:09.037 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84785 /var/tmp/bdevperf.sock 00:17:09.037 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84785 ']' 00:17:09.037 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.037 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.037 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.037 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.037 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.037 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:09.037 [2024-11-20 15:24:55.457189] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:17:09.037 [2024-11-20 15:24:55.457968] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84785 ] 00:17:09.297 [2024-11-20 15:24:55.621147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.297 [2024-11-20 15:24:55.703634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.237 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.237 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:10.237 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BdodikX28j 00:17:10.237 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:10.496 [2024-11-20 15:24:56.846519] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:10.496 nvme0n1 00:17:10.496 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:10.755 Running I/O for 1 seconds... 
00:17:11.691 5551.00 IOPS, 21.68 MiB/s 00:17:11.691 Latency(us) 00:17:11.691 [2024-11-20T15:24:58.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.691 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:11.691 Verification LBA range: start 0x0 length 0x2000 00:17:11.691 nvme0n1 : 1.01 5611.24 21.92 0.00 0.00 22654.44 4474.35 16844.59 00:17:11.691 [2024-11-20T15:24:58.173Z] =================================================================================================================== 00:17:11.691 [2024-11-20T15:24:58.173Z] Total : 5611.24 21.92 0.00 0.00 22654.44 4474.35 16844.59 00:17:11.691 { 00:17:11.691 "results": [ 00:17:11.691 { 00:17:11.691 "job": "nvme0n1", 00:17:11.691 "core_mask": "0x2", 00:17:11.691 "workload": "verify", 00:17:11.691 "status": "finished", 00:17:11.691 "verify_range": { 00:17:11.691 "start": 0, 00:17:11.691 "length": 8192 00:17:11.691 }, 00:17:11.691 "queue_depth": 128, 00:17:11.691 "io_size": 4096, 00:17:11.691 "runtime": 1.012254, 00:17:11.691 "iops": 5611.23986667378, 00:17:11.691 "mibps": 21.918905729194453, 00:17:11.691 "io_failed": 0, 00:17:11.691 "io_timeout": 0, 00:17:11.691 "avg_latency_us": 22654.443350868263, 00:17:11.691 "min_latency_us": 4474.345381526105, 00:17:11.691 "max_latency_us": 16844.59437751004 00:17:11.691 } 00:17:11.691 ], 00:17:11.691 "core_count": 1 00:17:11.691 } 00:17:11.691 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84785 00:17:11.691 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84785 ']' 00:17:11.691 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84785 00:17:11.691 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:11.691 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.691 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84785 00:17:11.691 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:11.691 killing process with pid 84785 00:17:11.691 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:11.691 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84785' 00:17:11.691 Received shutdown signal, test time was about 1.000000 seconds 00:17:11.691 00:17:11.691 Latency(us) 00:17:11.691 [2024-11-20T15:24:58.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.691 [2024-11-20T15:24:58.173Z] =================================================================================================================== 00:17:11.691 [2024-11-20T15:24:58.173Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:11.691 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84785 00:17:11.691 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84785 00:17:11.951 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84680 00:17:11.951 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84680 ']' 00:17:11.951 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84680 00:17:11.951 15:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:11.951 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.951 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84680 00:17:12.210 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.210 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.210 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84680' 00:17:12.210 killing process with pid 84680 00:17:12.210 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84680 00:17:12.210 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84680 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84866 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84866 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84866 ']' 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.469 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:12.469 [2024-11-20 15:24:58.788095] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:17:12.469 [2024-11-20 15:24:58.788205] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.469 [2024-11-20 15:24:58.944893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.755 [2024-11-20 15:24:59.026330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.755 [2024-11-20 15:24:59.026411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
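The killprocess fragments interleaved above all follow one shape; a reconstruction from the visible xtrace lines, with the sudo re-exec branch ('[' reactor_N = sudo ']') left as a bare test because its body never appears in this log:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                           # signal 0: is the process still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")  # reactor_N for SPDK apps
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                          # reap it, freeing sockets and hugepages
}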
00:17:12.755 [2024-11-20 15:24:59.026422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.755 [2024-11-20 15:24:59.026431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.755 [2024-11-20 15:24:59.026439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.755 [2024-11-20 15:24:59.026842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.328 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.328 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:13.328 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:13.328 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:13.328 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.328 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.328 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:17:13.328 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.328 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.328 [2024-11-20 15:24:59.786549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.587 malloc0 00:17:13.587 [2024-11-20 15:24:59.825653] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:13.587 [2024-11-20 15:24:59.826013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:13.587 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.587 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84916 00:17:13.587 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:13.587 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84916 /var/tmp/bdevperf.sock 00:17:13.587 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84916 ']' 00:17:13.587 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.587 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.587 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.587 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.587 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.587 [2024-11-20 15:24:59.910652] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:17:13.587 [2024-11-20 15:24:59.910767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84916 ] 00:17:13.587 [2024-11-20 15:25:00.062595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.846 [2024-11-20 15:25:00.146603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.783 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.783 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:14.783 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BdodikX28j 00:17:14.783 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:15.042 [2024-11-20 15:25:01.413740] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:15.042 nvme0n1 00:17:15.042 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:15.300 Running I/O for 1 seconds... 00:17:16.238 5554.00 IOPS, 21.70 MiB/s 00:17:16.238 Latency(us) 00:17:16.238 [2024-11-20T15:25:02.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.238 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:16.238 Verification LBA range: start 0x0 length 0x2000 00:17:16.238 nvme0n1 : 1.01 5613.04 21.93 0.00 0.00 22643.93 4790.18 16528.76 00:17:16.238 [2024-11-20T15:25:02.720Z] =================================================================================================================== 00:17:16.238 [2024-11-20T15:25:02.720Z] Total : 5613.04 21.93 0.00 0.00 22643.93 4790.18 16528.76 00:17:16.238 { 00:17:16.238 "results": [ 00:17:16.238 { 00:17:16.238 "job": "nvme0n1", 00:17:16.238 "core_mask": "0x2", 00:17:16.238 "workload": "verify", 00:17:16.238 "status": "finished", 00:17:16.238 "verify_range": { 00:17:16.238 "start": 0, 00:17:16.238 "length": 8192 00:17:16.238 }, 00:17:16.238 "queue_depth": 128, 00:17:16.238 "io_size": 4096, 00:17:16.238 "runtime": 1.012285, 00:17:16.238 "iops": 5613.043757439851, 00:17:16.238 "mibps": 21.92595217749942, 00:17:16.238 "io_failed": 0, 00:17:16.238 "io_timeout": 0, 00:17:16.238 "avg_latency_us": 22643.926210721096, 00:17:16.238 "min_latency_us": 4790.181526104418, 00:17:16.238 "max_latency_us": 16528.758232931727 00:17:16.238 } 00:17:16.238 ], 00:17:16.238 "core_count": 1 00:17:16.238 } 00:17:16.238 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:16.238 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.238 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.498 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.498 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:17:16.498 "subsystems": [ 00:17:16.498 { 00:17:16.498 "subsystem": "keyring", 00:17:16.498 "config": [ 00:17:16.498 { 00:17:16.498 "method": "keyring_file_add_key", 00:17:16.498 "params": { 00:17:16.498 "name": "key0", 00:17:16.498 "path": "/tmp/tmp.BdodikX28j" 00:17:16.498 } 00:17:16.498 } 00:17:16.498 ] 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "subsystem": "iobuf", 00:17:16.498 "config": [ 00:17:16.498 { 00:17:16.498 "method": "iobuf_set_options", 00:17:16.498 "params": { 00:17:16.498 "enable_numa": false, 00:17:16.498 "large_bufsize": 135168, 00:17:16.498 "large_pool_count": 1024, 00:17:16.498 "small_bufsize": 8192, 00:17:16.498 "small_pool_count": 8192 00:17:16.498 } 00:17:16.498 } 00:17:16.498 ] 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "subsystem": "sock", 00:17:16.498 "config": [ 00:17:16.498 { 00:17:16.498 "method": "sock_set_default_impl", 00:17:16.498 "params": { 00:17:16.498 "impl_name": "posix" 00:17:16.498 } 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "method": "sock_impl_set_options", 00:17:16.498 "params": { 00:17:16.498 "enable_ktls": false, 00:17:16.498 "enable_placement_id": 0, 00:17:16.498 "enable_quickack": false, 00:17:16.498 "enable_recv_pipe": true, 00:17:16.498 "enable_zerocopy_send_client": false, 00:17:16.498 "enable_zerocopy_send_server": true, 00:17:16.498 "impl_name": "ssl", 00:17:16.498 "recv_buf_size": 4096, 00:17:16.498 "send_buf_size": 4096, 00:17:16.498 "tls_version": 0, 00:17:16.498 "zerocopy_threshold": 0 00:17:16.498 } 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "method": "sock_impl_set_options", 00:17:16.498 "params": { 00:17:16.498 "enable_ktls": false, 00:17:16.498 "enable_placement_id": 0, 00:17:16.498 "enable_quickack": false, 00:17:16.498 "enable_recv_pipe": true, 00:17:16.498 "enable_zerocopy_send_client": false, 00:17:16.498 "enable_zerocopy_send_server": true, 00:17:16.498 "impl_name": "posix", 00:17:16.498 "recv_buf_size": 2097152, 00:17:16.498 "send_buf_size": 2097152, 00:17:16.498 "tls_version": 0, 00:17:16.498 "zerocopy_threshold": 0 00:17:16.498 } 00:17:16.498 } 00:17:16.498 ] 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "subsystem": "vmd", 00:17:16.498 "config": [] 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "subsystem": "accel", 00:17:16.498 "config": [ 00:17:16.498 { 00:17:16.498 "method": "accel_set_options", 00:17:16.498 "params": { 00:17:16.498 "buf_count": 2048, 00:17:16.498 "large_cache_size": 16, 00:17:16.498 "sequence_count": 2048, 00:17:16.498 "small_cache_size": 128, 00:17:16.498 "task_count": 2048 00:17:16.498 } 00:17:16.498 } 00:17:16.498 ] 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "subsystem": "bdev", 00:17:16.498 "config": [ 00:17:16.498 { 00:17:16.498 "method": "bdev_set_options", 00:17:16.498 "params": { 00:17:16.498 "bdev_auto_examine": true, 00:17:16.498 "bdev_io_cache_size": 256, 00:17:16.498 "bdev_io_pool_size": 65535, 00:17:16.498 "iobuf_large_cache_size": 16, 00:17:16.498 "iobuf_small_cache_size": 128 00:17:16.498 } 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "method": "bdev_raid_set_options", 00:17:16.498 "params": { 00:17:16.498 "process_max_bandwidth_mb_sec": 0, 00:17:16.498 "process_window_size_kb": 1024 00:17:16.498 } 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "method": "bdev_iscsi_set_options", 00:17:16.498 "params": { 00:17:16.498 "timeout_sec": 30 00:17:16.498 } 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "method": "bdev_nvme_set_options", 00:17:16.498 "params": { 00:17:16.498 "action_on_timeout": "none", 00:17:16.498 "allow_accel_sequence": false, 00:17:16.498 "arbitration_burst": 0, 00:17:16.498 
"bdev_retry_count": 3, 00:17:16.498 "ctrlr_loss_timeout_sec": 0, 00:17:16.498 "delay_cmd_submit": true, 00:17:16.498 "dhchap_dhgroups": [ 00:17:16.498 "null", 00:17:16.498 "ffdhe2048", 00:17:16.498 "ffdhe3072", 00:17:16.498 "ffdhe4096", 00:17:16.498 "ffdhe6144", 00:17:16.498 "ffdhe8192" 00:17:16.498 ], 00:17:16.498 "dhchap_digests": [ 00:17:16.498 "sha256", 00:17:16.498 "sha384", 00:17:16.498 "sha512" 00:17:16.498 ], 00:17:16.498 "disable_auto_failback": false, 00:17:16.498 "fast_io_fail_timeout_sec": 0, 00:17:16.498 "generate_uuids": false, 00:17:16.498 "high_priority_weight": 0, 00:17:16.498 "io_path_stat": false, 00:17:16.498 "io_queue_requests": 0, 00:17:16.498 "keep_alive_timeout_ms": 10000, 00:17:16.498 "low_priority_weight": 0, 00:17:16.498 "medium_priority_weight": 0, 00:17:16.498 "nvme_adminq_poll_period_us": 10000, 00:17:16.498 "nvme_error_stat": false, 00:17:16.498 "nvme_ioq_poll_period_us": 0, 00:17:16.498 "rdma_cm_event_timeout_ms": 0, 00:17:16.498 "rdma_max_cq_size": 0, 00:17:16.498 "rdma_srq_size": 0, 00:17:16.498 "reconnect_delay_sec": 0, 00:17:16.498 "timeout_admin_us": 0, 00:17:16.498 "timeout_us": 0, 00:17:16.498 "transport_ack_timeout": 0, 00:17:16.498 "transport_retry_count": 4, 00:17:16.498 "transport_tos": 0 00:17:16.498 } 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "method": "bdev_nvme_set_hotplug", 00:17:16.498 "params": { 00:17:16.498 "enable": false, 00:17:16.498 "period_us": 100000 00:17:16.498 } 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "method": "bdev_malloc_create", 00:17:16.498 "params": { 00:17:16.498 "block_size": 4096, 00:17:16.498 "dif_is_head_of_md": false, 00:17:16.498 "dif_pi_format": 0, 00:17:16.498 "dif_type": 0, 00:17:16.498 "md_size": 0, 00:17:16.498 "name": "malloc0", 00:17:16.498 "num_blocks": 8192, 00:17:16.498 "optimal_io_boundary": 0, 00:17:16.498 "physical_block_size": 4096, 00:17:16.498 "uuid": "c888605f-2664-4474-b983-e30386b985a8" 00:17:16.498 } 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "method": "bdev_wait_for_examine" 00:17:16.498 } 00:17:16.498 ] 00:17:16.498 }, 00:17:16.498 { 00:17:16.498 "subsystem": "nbd", 00:17:16.498 "config": [] 00:17:16.498 }, 00:17:16.499 { 00:17:16.499 "subsystem": "scheduler", 00:17:16.499 "config": [ 00:17:16.499 { 00:17:16.499 "method": "framework_set_scheduler", 00:17:16.499 "params": { 00:17:16.499 "name": "static" 00:17:16.499 } 00:17:16.499 } 00:17:16.499 ] 00:17:16.499 }, 00:17:16.499 { 00:17:16.499 "subsystem": "nvmf", 00:17:16.499 "config": [ 00:17:16.499 { 00:17:16.499 "method": "nvmf_set_config", 00:17:16.499 "params": { 00:17:16.499 "admin_cmd_passthru": { 00:17:16.499 "identify_ctrlr": false 00:17:16.499 }, 00:17:16.499 "dhchap_dhgroups": [ 00:17:16.499 "null", 00:17:16.499 "ffdhe2048", 00:17:16.499 "ffdhe3072", 00:17:16.499 "ffdhe4096", 00:17:16.499 "ffdhe6144", 00:17:16.499 "ffdhe8192" 00:17:16.499 ], 00:17:16.499 "dhchap_digests": [ 00:17:16.499 "sha256", 00:17:16.499 "sha384", 00:17:16.499 "sha512" 00:17:16.499 ], 00:17:16.499 "discovery_filter": "match_any" 00:17:16.499 } 00:17:16.499 }, 00:17:16.499 { 00:17:16.499 "method": "nvmf_set_max_subsystems", 00:17:16.499 "params": { 00:17:16.499 "max_subsystems": 1024 00:17:16.499 } 00:17:16.499 }, 00:17:16.499 { 00:17:16.499 "method": "nvmf_set_crdt", 00:17:16.499 "params": { 00:17:16.499 "crdt1": 0, 00:17:16.499 "crdt2": 0, 00:17:16.499 "crdt3": 0 00:17:16.499 } 00:17:16.499 }, 00:17:16.499 { 00:17:16.499 "method": "nvmf_create_transport", 00:17:16.499 "params": { 00:17:16.499 "abort_timeout_sec": 1, 00:17:16.499 "ack_timeout": 0, 
00:17:16.499 "buf_cache_size": 4294967295, 00:17:16.499 "c2h_success": false, 00:17:16.499 "data_wr_pool_size": 0, 00:17:16.499 "dif_insert_or_strip": false, 00:17:16.499 "in_capsule_data_size": 4096, 00:17:16.499 "io_unit_size": 131072, 00:17:16.499 "max_aq_depth": 128, 00:17:16.499 "max_io_qpairs_per_ctrlr": 127, 00:17:16.499 "max_io_size": 131072, 00:17:16.499 "max_queue_depth": 128, 00:17:16.499 "num_shared_buffers": 511, 00:17:16.499 "sock_priority": 0, 00:17:16.499 "trtype": "TCP", 00:17:16.499 "zcopy": false 00:17:16.499 } 00:17:16.499 }, 00:17:16.499 { 00:17:16.499 "method": "nvmf_create_subsystem", 00:17:16.499 "params": { 00:17:16.499 "allow_any_host": false, 00:17:16.499 "ana_reporting": false, 00:17:16.499 "max_cntlid": 65519, 00:17:16.499 "max_namespaces": 32, 00:17:16.499 "min_cntlid": 1, 00:17:16.499 "model_number": "SPDK bdev Controller", 00:17:16.499 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.499 "serial_number": "00000000000000000000" 00:17:16.499 } 00:17:16.499 }, 00:17:16.499 { 00:17:16.499 "method": "nvmf_subsystem_add_host", 00:17:16.499 "params": { 00:17:16.499 "host": "nqn.2016-06.io.spdk:host1", 00:17:16.499 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.499 "psk": "key0" 00:17:16.499 } 00:17:16.499 }, 00:17:16.499 { 00:17:16.499 "method": "nvmf_subsystem_add_ns", 00:17:16.499 "params": { 00:17:16.499 "namespace": { 00:17:16.499 "bdev_name": "malloc0", 00:17:16.499 "nguid": "C888605F26644474B983E30386B985A8", 00:17:16.499 "no_auto_visible": false, 00:17:16.499 "nsid": 1, 00:17:16.499 "uuid": "c888605f-2664-4474-b983-e30386b985a8" 00:17:16.499 }, 00:17:16.499 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:16.499 } 00:17:16.499 }, 00:17:16.499 { 00:17:16.499 "method": "nvmf_subsystem_add_listener", 00:17:16.499 "params": { 00:17:16.499 "listen_address": { 00:17:16.499 "adrfam": "IPv4", 00:17:16.499 "traddr": "10.0.0.3", 00:17:16.499 "trsvcid": "4420", 00:17:16.499 "trtype": "TCP" 00:17:16.499 }, 00:17:16.499 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.499 "secure_channel": false, 00:17:16.499 "sock_impl": "ssl" 00:17:16.499 } 00:17:16.499 } 00:17:16.499 ] 00:17:16.499 } 00:17:16.499 ] 00:17:16.499 }' 00:17:16.499 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:16.759 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:17:16.759 "subsystems": [ 00:17:16.759 { 00:17:16.759 "subsystem": "keyring", 00:17:16.759 "config": [ 00:17:16.759 { 00:17:16.759 "method": "keyring_file_add_key", 00:17:16.759 "params": { 00:17:16.759 "name": "key0", 00:17:16.759 "path": "/tmp/tmp.BdodikX28j" 00:17:16.759 } 00:17:16.759 } 00:17:16.759 ] 00:17:16.759 }, 00:17:16.759 { 00:17:16.759 "subsystem": "iobuf", 00:17:16.759 "config": [ 00:17:16.759 { 00:17:16.759 "method": "iobuf_set_options", 00:17:16.759 "params": { 00:17:16.759 "enable_numa": false, 00:17:16.759 "large_bufsize": 135168, 00:17:16.759 "large_pool_count": 1024, 00:17:16.759 "small_bufsize": 8192, 00:17:16.759 "small_pool_count": 8192 00:17:16.759 } 00:17:16.759 } 00:17:16.759 ] 00:17:16.759 }, 00:17:16.759 { 00:17:16.759 "subsystem": "sock", 00:17:16.759 "config": [ 00:17:16.759 { 00:17:16.759 "method": "sock_set_default_impl", 00:17:16.759 "params": { 00:17:16.759 "impl_name": "posix" 00:17:16.759 } 00:17:16.759 }, 00:17:16.759 { 00:17:16.759 "method": "sock_impl_set_options", 00:17:16.759 "params": { 00:17:16.759 "enable_ktls": false, 00:17:16.759 "enable_placement_id": 0, 
00:17:16.759 "enable_quickack": false, 00:17:16.759 "enable_recv_pipe": true, 00:17:16.759 "enable_zerocopy_send_client": false, 00:17:16.759 "enable_zerocopy_send_server": true, 00:17:16.759 "impl_name": "ssl", 00:17:16.759 "recv_buf_size": 4096, 00:17:16.759 "send_buf_size": 4096, 00:17:16.759 "tls_version": 0, 00:17:16.759 "zerocopy_threshold": 0 00:17:16.759 } 00:17:16.759 }, 00:17:16.759 { 00:17:16.759 "method": "sock_impl_set_options", 00:17:16.759 "params": { 00:17:16.759 "enable_ktls": false, 00:17:16.759 "enable_placement_id": 0, 00:17:16.759 "enable_quickack": false, 00:17:16.759 "enable_recv_pipe": true, 00:17:16.759 "enable_zerocopy_send_client": false, 00:17:16.759 "enable_zerocopy_send_server": true, 00:17:16.759 "impl_name": "posix", 00:17:16.759 "recv_buf_size": 2097152, 00:17:16.759 "send_buf_size": 2097152, 00:17:16.759 "tls_version": 0, 00:17:16.759 "zerocopy_threshold": 0 00:17:16.759 } 00:17:16.759 } 00:17:16.759 ] 00:17:16.759 }, 00:17:16.759 { 00:17:16.759 "subsystem": "vmd", 00:17:16.759 "config": [] 00:17:16.759 }, 00:17:16.759 { 00:17:16.759 "subsystem": "accel", 00:17:16.759 "config": [ 00:17:16.759 { 00:17:16.759 "method": "accel_set_options", 00:17:16.759 "params": { 00:17:16.759 "buf_count": 2048, 00:17:16.759 "large_cache_size": 16, 00:17:16.759 "sequence_count": 2048, 00:17:16.759 "small_cache_size": 128, 00:17:16.759 "task_count": 2048 00:17:16.759 } 00:17:16.759 } 00:17:16.759 ] 00:17:16.759 }, 00:17:16.759 { 00:17:16.759 "subsystem": "bdev", 00:17:16.759 "config": [ 00:17:16.759 { 00:17:16.759 "method": "bdev_set_options", 00:17:16.759 "params": { 00:17:16.759 "bdev_auto_examine": true, 00:17:16.759 "bdev_io_cache_size": 256, 00:17:16.759 "bdev_io_pool_size": 65535, 00:17:16.759 "iobuf_large_cache_size": 16, 00:17:16.759 "iobuf_small_cache_size": 128 00:17:16.759 } 00:17:16.759 }, 00:17:16.759 { 00:17:16.759 "method": "bdev_raid_set_options", 00:17:16.759 "params": { 00:17:16.759 "process_max_bandwidth_mb_sec": 0, 00:17:16.759 "process_window_size_kb": 1024 00:17:16.759 } 00:17:16.759 }, 00:17:16.759 { 00:17:16.759 "method": "bdev_iscsi_set_options", 00:17:16.759 "params": { 00:17:16.759 "timeout_sec": 30 00:17:16.759 } 00:17:16.759 }, 00:17:16.759 { 00:17:16.759 "method": "bdev_nvme_set_options", 00:17:16.759 "params": { 00:17:16.759 "action_on_timeout": "none", 00:17:16.759 "allow_accel_sequence": false, 00:17:16.759 "arbitration_burst": 0, 00:17:16.759 "bdev_retry_count": 3, 00:17:16.759 "ctrlr_loss_timeout_sec": 0, 00:17:16.759 "delay_cmd_submit": true, 00:17:16.759 "dhchap_dhgroups": [ 00:17:16.759 "null", 00:17:16.759 "ffdhe2048", 00:17:16.759 "ffdhe3072", 00:17:16.759 "ffdhe4096", 00:17:16.759 "ffdhe6144", 00:17:16.759 "ffdhe8192" 00:17:16.759 ], 00:17:16.759 "dhchap_digests": [ 00:17:16.759 "sha256", 00:17:16.759 "sha384", 00:17:16.759 "sha512" 00:17:16.759 ], 00:17:16.759 "disable_auto_failback": false, 00:17:16.759 "fast_io_fail_timeout_sec": 0, 00:17:16.759 "generate_uuids": false, 00:17:16.759 "high_priority_weight": 0, 00:17:16.759 "io_path_stat": false, 00:17:16.759 "io_queue_requests": 512, 00:17:16.759 "keep_alive_timeout_ms": 10000, 00:17:16.759 "low_priority_weight": 0, 00:17:16.759 "medium_priority_weight": 0, 00:17:16.759 "nvme_adminq_poll_period_us": 10000, 00:17:16.759 "nvme_error_stat": false, 00:17:16.759 "nvme_ioq_poll_period_us": 0, 00:17:16.759 "rdma_cm_event_timeout_ms": 0, 00:17:16.760 "rdma_max_cq_size": 0, 00:17:16.760 "rdma_srq_size": 0, 00:17:16.760 "reconnect_delay_sec": 0, 00:17:16.760 "timeout_admin_us": 0, 00:17:16.760 
"timeout_us": 0, 00:17:16.760 "transport_ack_timeout": 0, 00:17:16.760 "transport_retry_count": 4, 00:17:16.760 "transport_tos": 0 00:17:16.760 } 00:17:16.760 }, 00:17:16.760 { 00:17:16.760 "method": "bdev_nvme_attach_controller", 00:17:16.760 "params": { 00:17:16.760 "adrfam": "IPv4", 00:17:16.760 "ctrlr_loss_timeout_sec": 0, 00:17:16.760 "ddgst": false, 00:17:16.760 "fast_io_fail_timeout_sec": 0, 00:17:16.760 "hdgst": false, 00:17:16.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:16.760 "multipath": "multipath", 00:17:16.760 "name": "nvme0", 00:17:16.760 "prchk_guard": false, 00:17:16.760 "prchk_reftag": false, 00:17:16.760 "psk": "key0", 00:17:16.760 "reconnect_delay_sec": 0, 00:17:16.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.760 "traddr": "10.0.0.3", 00:17:16.760 "trsvcid": "4420", 00:17:16.760 "trtype": "TCP" 00:17:16.760 } 00:17:16.760 }, 00:17:16.760 { 00:17:16.760 "method": "bdev_nvme_set_hotplug", 00:17:16.760 "params": { 00:17:16.760 "enable": false, 00:17:16.760 "period_us": 100000 00:17:16.760 } 00:17:16.760 }, 00:17:16.760 { 00:17:16.760 "method": "bdev_enable_histogram", 00:17:16.760 "params": { 00:17:16.760 "enable": true, 00:17:16.760 "name": "nvme0n1" 00:17:16.760 } 00:17:16.760 }, 00:17:16.760 { 00:17:16.760 "method": "bdev_wait_for_examine" 00:17:16.760 } 00:17:16.760 ] 00:17:16.760 }, 00:17:16.760 { 00:17:16.760 "subsystem": "nbd", 00:17:16.760 "config": [] 00:17:16.760 } 00:17:16.760 ] 00:17:16.760 }' 00:17:16.760 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84916 00:17:16.760 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84916 ']' 00:17:16.760 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84916 00:17:16.760 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:16.760 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.760 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84916 00:17:16.760 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:16.760 killing process with pid 84916 00:17:16.760 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:16.760 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84916' 00:17:16.760 Received shutdown signal, test time was about 1.000000 seconds 00:17:16.760 00:17:16.760 Latency(us) 00:17:16.760 [2024-11-20T15:25:03.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.760 [2024-11-20T15:25:03.242Z] =================================================================================================================== 00:17:16.760 [2024-11-20T15:25:03.242Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.760 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84916 00:17:16.760 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84916 00:17:17.020 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84866 00:17:17.021 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84866 ']' 00:17:17.021 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84866 
00:17:17.021 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:17.021 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.021 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84866 00:17:17.021 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.021 killing process with pid 84866 00:17:17.021 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.021 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84866' 00:17:17.021 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84866 00:17:17.021 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84866 00:17:17.280 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:17:17.280 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:17.280 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.280 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:17.280 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:17:17.280 "subsystems": [ 00:17:17.280 { 00:17:17.280 "subsystem": "keyring", 00:17:17.280 "config": [ 00:17:17.280 { 00:17:17.280 "method": "keyring_file_add_key", 00:17:17.280 "params": { 00:17:17.280 "name": "key0", 00:17:17.280 "path": "/tmp/tmp.BdodikX28j" 00:17:17.280 } 00:17:17.280 } 00:17:17.280 ] 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "subsystem": "iobuf", 00:17:17.280 "config": [ 00:17:17.280 { 00:17:17.280 "method": "iobuf_set_options", 00:17:17.280 "params": { 00:17:17.280 "enable_numa": false, 00:17:17.280 "large_bufsize": 135168, 00:17:17.280 "large_pool_count": 1024, 00:17:17.280 "small_bufsize": 8192, 00:17:17.280 "small_pool_count": 8192 00:17:17.280 } 00:17:17.280 } 00:17:17.280 ] 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "subsystem": "sock", 00:17:17.280 "config": [ 00:17:17.280 { 00:17:17.280 "method": "sock_set_default_impl", 00:17:17.280 "params": { 00:17:17.280 "impl_name": "posix" 00:17:17.280 } 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "method": "sock_impl_set_options", 00:17:17.280 "params": { 00:17:17.280 "enable_ktls": false, 00:17:17.280 "enable_placement_id": 0, 00:17:17.280 "enable_quickack": false, 00:17:17.280 "enable_recv_pipe": true, 00:17:17.280 "enable_zerocopy_send_client": false, 00:17:17.280 "enable_zerocopy_send_server": true, 00:17:17.280 "impl_name": "ssl", 00:17:17.280 "recv_buf_size": 4096, 00:17:17.280 "send_buf_size": 4096, 00:17:17.280 "tls_version": 0, 00:17:17.280 "zerocopy_threshold": 0 00:17:17.280 } 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "method": "sock_impl_set_options", 00:17:17.280 "params": { 00:17:17.280 "enable_ktls": false, 00:17:17.280 "enable_placement_id": 0, 00:17:17.280 "enable_quickack": false, 00:17:17.280 "enable_recv_pipe": true, 00:17:17.280 "enable_zerocopy_send_client": false, 00:17:17.280 "enable_zerocopy_send_server": true, 00:17:17.280 "impl_name": "posix", 00:17:17.280 "recv_buf_size": 2097152, 00:17:17.280 "send_buf_size": 2097152, 00:17:17.280 "tls_version": 0, 00:17:17.280 "zerocopy_threshold": 0 00:17:17.280 } 
00:17:17.280 } 00:17:17.280 ] 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "subsystem": "vmd", 00:17:17.280 "config": [] 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "subsystem": "accel", 00:17:17.280 "config": [ 00:17:17.280 { 00:17:17.280 "method": "accel_set_options", 00:17:17.280 "params": { 00:17:17.280 "buf_count": 2048, 00:17:17.280 "large_cache_size": 16, 00:17:17.280 "sequence_count": 2048, 00:17:17.280 "small_cache_size": 128, 00:17:17.280 "task_count": 2048 00:17:17.280 } 00:17:17.280 } 00:17:17.280 ] 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "subsystem": "bdev", 00:17:17.280 "config": [ 00:17:17.280 { 00:17:17.280 "method": "bdev_set_options", 00:17:17.280 "params": { 00:17:17.280 "bdev_auto_examine": true, 00:17:17.280 "bdev_io_cache_size": 256, 00:17:17.280 "bdev_io_pool_size": 65535, 00:17:17.280 "iobuf_large_cache_size": 16, 00:17:17.280 "iobuf_small_cache_size": 128 00:17:17.280 } 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "method": "bdev_raid_set_options", 00:17:17.280 "params": { 00:17:17.280 "process_max_bandwidth_mb_sec": 0, 00:17:17.280 "process_window_size_kb": 1024 00:17:17.280 } 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "method": "bdev_iscsi_set_options", 00:17:17.280 "params": { 00:17:17.280 "timeout_sec": 30 00:17:17.280 } 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "method": "bdev_nvme_set_options", 00:17:17.280 "params": { 00:17:17.280 "action_on_timeout": "none", 00:17:17.280 "allow_accel_sequence": false, 00:17:17.280 "arbitration_burst": 0, 00:17:17.280 "bdev_retry_count": 3, 00:17:17.280 "ctrlr_loss_timeout_sec": 0, 00:17:17.280 "delay_cmd_submit": true, 00:17:17.280 "dhchap_dhgroups": [ 00:17:17.280 "null", 00:17:17.280 "ffdhe2048", 00:17:17.280 "ffdhe3072", 00:17:17.280 "ffdhe4096", 00:17:17.280 "ffdhe6144", 00:17:17.280 "ffdhe8192" 00:17:17.280 ], 00:17:17.280 "dhchap_digests": [ 00:17:17.280 "sha256", 00:17:17.280 "sha384", 00:17:17.280 "sha512" 00:17:17.280 ], 00:17:17.280 "disable_auto_failback": false, 00:17:17.280 "fast_io_fail_timeout_sec": 0, 00:17:17.280 "generate_uuids": false, 00:17:17.280 "high_priority_weight": 0, 00:17:17.280 "io_path_stat": false, 00:17:17.280 "io_queue_requests": 0, 00:17:17.280 "keep_alive_timeout_ms": 10000, 00:17:17.280 "low_priority_weight": 0, 00:17:17.280 "medium_priority_weight": 0, 00:17:17.280 "nvme_adminq_poll_period_us": 10000, 00:17:17.280 "nvme_error_stat": false, 00:17:17.280 "nvme_ioq_poll_period_us": 0, 00:17:17.280 "rdma_cm_event_timeout_ms": 0, 00:17:17.280 "rdma_max_cq_size": 0, 00:17:17.280 "rdma_srq_size": 0, 00:17:17.280 "reconnect_delay_sec": 0, 00:17:17.280 "timeout_admin_us": 0, 00:17:17.280 "timeout_us": 0, 00:17:17.280 "transport_ack_timeout": 0, 00:17:17.280 "transport_retry_count": 4, 00:17:17.280 "transport_tos": 0 00:17:17.280 } 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "method": "bdev_nvme_set_hotplug", 00:17:17.280 "params": { 00:17:17.280 "enable": false, 00:17:17.280 "period_us": 100000 00:17:17.280 } 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "method": "bdev_malloc_create", 00:17:17.280 "params": { 00:17:17.280 "block_size": 4096, 00:17:17.280 "dif_is_head_of_md": false, 00:17:17.280 "dif_pi_format": 0, 00:17:17.280 "dif_type": 0, 00:17:17.280 "md_size": 0, 00:17:17.280 "name": "malloc0", 00:17:17.280 "num_blocks": 8192, 00:17:17.280 "optimal_io_boundary": 0, 00:17:17.280 "physical_block_size": 4096, 00:17:17.280 "uuid": "c888605f-2664-4474-b983-e30386b985a8" 00:17:17.280 } 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "method": "bdev_wait_for_examine" 00:17:17.280 } 00:17:17.280 ] 
00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "subsystem": "nbd", 00:17:17.280 "config": [] 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "subsystem": "scheduler", 00:17:17.280 "config": [ 00:17:17.280 { 00:17:17.280 "method": "framework_set_scheduler", 00:17:17.280 "params": { 00:17:17.280 "name": "static" 00:17:17.280 } 00:17:17.280 } 00:17:17.280 ] 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "subsystem": "nvmf", 00:17:17.280 "config": [ 00:17:17.280 { 00:17:17.280 "method": "nvmf_set_config", 00:17:17.280 "params": { 00:17:17.280 "admin_cmd_passthru": { 00:17:17.280 "identify_ctrlr": false 00:17:17.280 }, 00:17:17.280 "dhchap_dhgroups": [ 00:17:17.280 "null", 00:17:17.280 "ffdhe2048", 00:17:17.280 "ffdhe3072", 00:17:17.280 "ffdhe4096", 00:17:17.280 "ffdhe6144", 00:17:17.280 "ffdhe8192" 00:17:17.280 ], 00:17:17.280 "dhchap_digests": [ 00:17:17.280 "sha256", 00:17:17.280 "sha384", 00:17:17.280 "sha512" 00:17:17.280 ], 00:17:17.280 "discovery_filter": "match_any" 00:17:17.280 } 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "method": "nvmf_set_max_subsystems", 00:17:17.281 "params": { 00:17:17.281 "max_subsystems": 1024 00:17:17.281 } 00:17:17.281 }, 00:17:17.281 { 00:17:17.281 "method": "nvmf_set_crdt", 00:17:17.281 "params": { 00:17:17.281 "crdt1": 0, 00:17:17.281 "crdt2": 0, 00:17:17.281 "crdt3": 0 00:17:17.281 } 00:17:17.281 }, 00:17:17.281 { 00:17:17.281 "method": "nvmf_create_transport", 00:17:17.281 "params": { 00:17:17.281 "abort_timeout_sec": 1, 00:17:17.281 "ack_timeout": 0, 00:17:17.281 "buf_cache_size": 4294967295, 00:17:17.281 "c2h_success": false, 00:17:17.281 "data_wr_pool_size": 0, 00:17:17.281 "dif_insert_or_strip": false, 00:17:17.281 "in_capsule_data_size": 4096, 00:17:17.281 "io_unit_size": 131072, 00:17:17.281 "max_aq_depth": 128, 00:17:17.281 "max_io_qpairs_per_ctrlr": 127, 00:17:17.281 "max_io_size": 131072, 00:17:17.281 "max_queue_depth": 128, 00:17:17.281 "num_shared_buffers": 511, 00:17:17.281 "sock_priority": 0, 00:17:17.281 "trtype": "TCP", 00:17:17.281 "zcopy": false 00:17:17.281 } 00:17:17.281 }, 00:17:17.281 { 00:17:17.281 "method": "nvmf_create_subsystem", 00:17:17.281 "params": { 00:17:17.281 "allow_any_host": false, 00:17:17.281 "ana_reporting": false, 00:17:17.281 "max_cntlid": 65519, 00:17:17.281 "max_namespaces": 32, 00:17:17.281 "min_cntlid": 1, 00:17:17.281 "model_number": "SPDK bdev Controller", 00:17:17.281 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.281 "serial_number": "00000000000000000000" 00:17:17.281 } 00:17:17.281 }, 00:17:17.281 { 00:17:17.281 "method": "nvmf_subsystem_add_host", 00:17:17.281 "params": { 00:17:17.281 "host": "nqn.2016-06.io.spdk:host1", 00:17:17.281 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.281 "psk": "key0" 00:17:17.281 } 00:17:17.281 }, 00:17:17.281 { 00:17:17.281 "method": "nvmf_subsystem_add_ns", 00:17:17.281 "params": { 00:17:17.281 "namespace": { 00:17:17.281 "bdev_name": "malloc0", 00:17:17.281 "nguid": "C888605F26644474B983E30386B985A8", 00:17:17.281 "no_auto_visible": false, 00:17:17.281 "nsid": 1, 00:17:17.281 "uuid": "c888605f-2664-4474-b983-e30386b985a8" 00:17:17.281 }, 00:17:17.281 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:17.281 } 00:17:17.281 }, 00:17:17.281 { 00:17:17.281 "method": "nvmf_subsystem_add_listener", 00:17:17.281 "params": { 00:17:17.281 "listen_address": { 00:17:17.281 "adrfam": "IPv4", 00:17:17.281 "traddr": "10.0.0.3", 00:17:17.281 "trsvcid": "4420", 00:17:17.281 "trtype": "TCP" 00:17:17.281 }, 00:17:17.281 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.281 "secure_channel": false, 00:17:17.281 
"sock_impl": "ssl" 00:17:17.281 } 00:17:17.281 } 00:17:17.281 ] 00:17:17.281 } 00:17:17.281 ] 00:17:17.281 }' 00:17:17.540 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85001 00:17:17.540 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:17.540 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85001 00:17:17.540 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85001 ']' 00:17:17.540 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.540 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.540 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.540 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.540 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:17.540 [2024-11-20 15:25:03.821987] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:17:17.540 [2024-11-20 15:25:03.822103] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.540 [2024-11-20 15:25:03.960515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.799 [2024-11-20 15:25:04.041018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.799 [2024-11-20 15:25:04.041088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.799 [2024-11-20 15:25:04.041099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.799 [2024-11-20 15:25:04.041107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.799 [2024-11-20 15:25:04.041114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:17.799 [2024-11-20 15:25:04.041571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.058 [2024-11-20 15:25:04.323110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.058 [2024-11-20 15:25:04.355019] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:18.058 [2024-11-20 15:25:04.355308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:18.318 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.318 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:18.318 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.318 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.318 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.577 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.577 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=85051 00:17:18.577 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 85051 /var/tmp/bdevperf.sock 00:17:18.577 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85051 ']' 00:17:18.577 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.577 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.577 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
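The initiator side repeats the trick with /dev/fd/63: the $bperfcfg JSON saved earlier over /var/tmp/bdevperf.sock seeds a new bdevperf, so the controller attach (including psk key0) happens at startup rather than via RPC. A minimal sketch, paths relative to the SPDK repo root:

bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")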
00:17:18.577 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.577 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.577 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:18.577 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:18.577 "subsystems": [ 00:17:18.577 { 00:17:18.577 "subsystem": "keyring", 00:17:18.577 "config": [ 00:17:18.577 { 00:17:18.577 "method": "keyring_file_add_key", 00:17:18.577 "params": { 00:17:18.577 "name": "key0", 00:17:18.577 "path": "/tmp/tmp.BdodikX28j" 00:17:18.577 } 00:17:18.577 } 00:17:18.577 ] 00:17:18.577 }, 00:17:18.577 { 00:17:18.577 "subsystem": "iobuf", 00:17:18.577 "config": [ 00:17:18.577 { 00:17:18.577 "method": "iobuf_set_options", 00:17:18.577 "params": { 00:17:18.577 "enable_numa": false, 00:17:18.577 "large_bufsize": 135168, 00:17:18.577 "large_pool_count": 1024, 00:17:18.577 "small_bufsize": 8192, 00:17:18.577 "small_pool_count": 8192 00:17:18.577 } 00:17:18.577 } 00:17:18.577 ] 00:17:18.577 }, 00:17:18.577 { 00:17:18.577 "subsystem": "sock", 00:17:18.577 "config": [ 00:17:18.577 { 00:17:18.577 "method": "sock_set_default_impl", 00:17:18.577 "params": { 00:17:18.577 "impl_name": "posix" 00:17:18.577 } 00:17:18.577 }, 00:17:18.577 { 00:17:18.577 "method": "sock_impl_set_options", 00:17:18.577 "params": { 00:17:18.577 "enable_ktls": false, 00:17:18.577 "enable_placement_id": 0, 00:17:18.577 "enable_quickack": false, 00:17:18.577 "enable_recv_pipe": true, 00:17:18.577 "enable_zerocopy_send_client": false, 00:17:18.577 "enable_zerocopy_send_server": true, 00:17:18.577 "impl_name": "ssl", 00:17:18.577 "recv_buf_size": 4096, 00:17:18.577 "send_buf_size": 4096, 00:17:18.577 "tls_version": 0, 00:17:18.577 "zerocopy_threshold": 0 00:17:18.577 } 00:17:18.577 }, 00:17:18.577 { 00:17:18.577 "method": "sock_impl_set_options", 00:17:18.577 "params": { 00:17:18.577 "enable_ktls": false, 00:17:18.577 "enable_placement_id": 0, 00:17:18.577 "enable_quickack": false, 00:17:18.577 "enable_recv_pipe": true, 00:17:18.577 "enable_zerocopy_send_client": false, 00:17:18.577 "enable_zerocopy_send_server": true, 00:17:18.577 "impl_name": "posix", 00:17:18.577 "recv_buf_size": 2097152, 00:17:18.577 "send_buf_size": 2097152, 00:17:18.577 "tls_version": 0, 00:17:18.577 "zerocopy_threshold": 0 00:17:18.577 } 00:17:18.577 } 00:17:18.577 ] 00:17:18.577 }, 00:17:18.577 { 00:17:18.577 "subsystem": "vmd", 00:17:18.577 "config": [] 00:17:18.577 }, 00:17:18.577 { 00:17:18.577 "subsystem": "accel", 00:17:18.577 "config": [ 00:17:18.577 { 00:17:18.577 "method": "accel_set_options", 00:17:18.577 "params": { 00:17:18.577 "buf_count": 2048, 00:17:18.577 "large_cache_size": 16, 00:17:18.577 "sequence_count": 2048, 00:17:18.577 "small_cache_size": 128, 00:17:18.577 "task_count": 2048 00:17:18.577 } 00:17:18.577 } 00:17:18.577 ] 00:17:18.577 }, 00:17:18.577 { 00:17:18.577 "subsystem": "bdev", 00:17:18.577 "config": [ 00:17:18.577 { 00:17:18.577 "method": "bdev_set_options", 00:17:18.577 "params": { 00:17:18.577 "bdev_auto_examine": true, 00:17:18.577 "bdev_io_cache_size": 256, 00:17:18.577 "bdev_io_pool_size": 65535, 00:17:18.577 "iobuf_large_cache_size": 16, 00:17:18.577 "iobuf_small_cache_size": 128 00:17:18.577 } 00:17:18.577 }, 00:17:18.577 { 00:17:18.577 "method": "bdev_raid_set_options", 
00:17:18.577 "params": { 00:17:18.577 "process_max_bandwidth_mb_sec": 0, 00:17:18.577 "process_window_size_kb": 1024 00:17:18.577 } 00:17:18.577 }, 00:17:18.577 { 00:17:18.577 "method": "bdev_iscsi_set_options", 00:17:18.577 "params": { 00:17:18.577 "timeout_sec": 30 00:17:18.577 } 00:17:18.577 }, 00:17:18.577 { 00:17:18.577 "method": "bdev_nvme_set_options", 00:17:18.577 "params": { 00:17:18.577 "action_on_timeout": "none", 00:17:18.577 "allow_accel_sequence": false, 00:17:18.577 "arbitration_burst": 0, 00:17:18.577 "bdev_retry_count": 3, 00:17:18.577 "ctrlr_loss_timeout_sec": 0, 00:17:18.577 "delay_cmd_submit": true, 00:17:18.577 "dhchap_dhgroups": [ 00:17:18.577 "null", 00:17:18.577 "ffdhe2048", 00:17:18.577 "ffdhe3072", 00:17:18.577 "ffdhe4096", 00:17:18.577 "ffdhe6144", 00:17:18.577 "ffdhe8192" 00:17:18.577 ], 00:17:18.577 "dhchap_digests": [ 00:17:18.577 "sha256", 00:17:18.577 "sha384", 00:17:18.577 "sha512" 00:17:18.577 ], 00:17:18.577 "disable_auto_failback": false, 00:17:18.577 "fast_io_fail_timeout_sec": 0, 00:17:18.577 "generate_uuids": false, 00:17:18.577 "high_priority_weight": 0, 00:17:18.577 "io_path_stat": false, 00:17:18.577 "io_queue_requests": 512, 00:17:18.577 "keep_alive_timeout_ms": 10000, 00:17:18.577 "low_priority_weight": 0, 00:17:18.577 "medium_priority_weight": 0, 00:17:18.577 "nvme_adminq_poll_period_us": 10000, 00:17:18.577 "nvme_error_stat": false, 00:17:18.577 "nvme_ioq_poll_period_us": 0, 00:17:18.577 "rdma_cm_event_timeout_ms": 0, 00:17:18.577 "rdma_max_cq_size": 0, 00:17:18.577 "rdma_srq_size": 0, 00:17:18.578 "reconnect_delay_sec": 0, 00:17:18.578 "timeout_admin_us": 0, 00:17:18.578 "timeout_us": 0, 00:17:18.578 "transport_ack_timeout": 0, 00:17:18.578 "transport_retry_count": 4, 00:17:18.578 "transport_tos": 0 00:17:18.578 } 00:17:18.578 }, 00:17:18.578 { 00:17:18.578 "method": "bdev_nvme_attach_controller", 00:17:18.578 "params": { 00:17:18.578 "adrfam": "IPv4", 00:17:18.578 "ctrlr_loss_timeout_sec": 0, 00:17:18.578 "ddgst": false, 00:17:18.578 "fast_io_fail_timeout_sec": 0, 00:17:18.578 "hdgst": false, 00:17:18.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.578 "multipath": "multipath", 00:17:18.578 "name": "nvme0", 00:17:18.578 "prchk_guard": false, 00:17:18.578 "prchk_reftag": false, 00:17:18.578 "psk": "key0", 00:17:18.578 "reconnect_delay_sec": 0, 00:17:18.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.578 "traddr": "10.0.0.3", 00:17:18.578 "trsvcid": "4420", 00:17:18.578 "trtype": "TCP" 00:17:18.578 } 00:17:18.578 }, 00:17:18.578 { 00:17:18.578 "method": "bdev_nvme_set_hotplug", 00:17:18.578 "params": { 00:17:18.578 "enable": false, 00:17:18.578 "period_us": 100000 00:17:18.578 } 00:17:18.578 }, 00:17:18.578 { 00:17:18.578 "method": "bdev_enable_histogram", 00:17:18.578 "params": { 00:17:18.578 "enable": true, 00:17:18.578 "name": "nvme0n1" 00:17:18.578 } 00:17:18.578 }, 00:17:18.578 { 00:17:18.578 "method": "bdev_wait_for_examine" 00:17:18.578 } 00:17:18.578 ] 00:17:18.578 }, 00:17:18.578 { 00:17:18.578 "subsystem": "nbd", 00:17:18.578 "config": [] 00:17:18.578 } 00:17:18.578 ] 00:17:18.578 }' 00:17:18.578 [2024-11-20 15:25:04.904169] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:17:18.578 [2024-11-20 15:25:04.904287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85051 ] 00:17:18.837 [2024-11-20 15:25:05.059552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.837 [2024-11-20 15:25:05.142669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.095 [2024-11-20 15:25:05.358303] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.663 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.663 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:19.663 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:19.663 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:19.663 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.663 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:19.921 Running I/O for 1 seconds... 00:17:20.859 5502.00 IOPS, 21.49 MiB/s 00:17:20.859 Latency(us) 00:17:20.859 [2024-11-20T15:25:07.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.859 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:20.859 Verification LBA range: start 0x0 length 0x2000 00:17:20.859 nvme0n1 : 1.01 5563.11 21.73 0.00 0.00 22845.77 4395.39 19792.40 00:17:20.859 [2024-11-20T15:25:07.341Z] =================================================================================================================== 00:17:20.859 [2024-11-20T15:25:07.341Z] Total : 5563.11 21.73 0.00 0.00 22845.77 4395.39 19792.40 00:17:20.859 { 00:17:20.859 "results": [ 00:17:20.859 { 00:17:20.859 "job": "nvme0n1", 00:17:20.859 "core_mask": "0x2", 00:17:20.859 "workload": "verify", 00:17:20.859 "status": "finished", 00:17:20.859 "verify_range": { 00:17:20.859 "start": 0, 00:17:20.859 "length": 8192 00:17:20.859 }, 00:17:20.859 "queue_depth": 128, 00:17:20.859 "io_size": 4096, 00:17:20.859 "runtime": 1.012203, 00:17:20.859 "iops": 5563.113328057711, 00:17:20.859 "mibps": 21.730911437725435, 00:17:20.859 "io_failed": 0, 00:17:20.859 "io_timeout": 0, 00:17:20.859 "avg_latency_us": 22845.768579699725, 00:17:20.859 "min_latency_us": 4395.386345381527, 00:17:20.859 "max_latency_us": 19792.398393574298 00:17:20.859 } 00:17:20.859 ], 00:17:20.859 "core_count": 1 00:17:20.859 } 00:17:20.859 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:17:20.859 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:20.859 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:20.859 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:17:20.859 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:17:20.859 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:20.859 
15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:20.859 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:20.859 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:20.859 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:20.859 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:20.859 nvmf_trace.0 00:17:21.118 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:17:21.118 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 85051 00:17:21.118 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85051 ']' 00:17:21.118 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85051 00:17:21.118 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:21.118 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.118 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85051 00:17:21.118 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:21.118 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:21.118 killing process with pid 85051 00:17:21.118 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85051' 00:17:21.118 Received shutdown signal, test time was about 1.000000 seconds 00:17:21.118 00:17:21.118 Latency(us) 00:17:21.118 [2024-11-20T15:25:07.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.118 [2024-11-20T15:25:07.600Z] =================================================================================================================== 00:17:21.118 [2024-11-20T15:25:07.600Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.118 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85051 00:17:21.118 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85051 00:17:21.376 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:21.376 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:21.376 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:21.376 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:21.376 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:21.376 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:21.376 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:21.376 rmmod nvme_tcp 00:17:21.376 rmmod nvme_fabrics 00:17:21.376 rmmod nvme_keyring 00:17:21.376 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:21.376 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 85001 ']' 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 85001 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85001 ']' 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85001 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85001 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.634 killing process with pid 85001 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85001' 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85001 00:17:21.634 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85001 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:21.893 15:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:21.893 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:22.152 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.152 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:22.152 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:22.152 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.152 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.152 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.152 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:17:22.152 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.YYxD6CufUe /tmp/tmp.XbzFuFoDJS /tmp/tmp.BdodikX28j 00:17:22.152 00:17:22.152 real 1m31.871s 00:17:22.152 user 2m23.087s 00:17:22.152 sys 0m33.750s 00:17:22.152 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.152 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.152 ************************************ 00:17:22.152 END TEST nvmf_tls 00:17:22.152 ************************************ 00:17:22.153 15:25:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:22.153 15:25:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:22.153 15:25:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.153 15:25:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:22.153 ************************************ 00:17:22.153 START TEST nvmf_fips 00:17:22.153 ************************************ 00:17:22.153 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:22.412 * Looking for test storage... 
00:17:22.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:22.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.412 --rc genhtml_branch_coverage=1 00:17:22.412 --rc genhtml_function_coverage=1 00:17:22.412 --rc genhtml_legend=1 00:17:22.412 --rc geninfo_all_blocks=1 00:17:22.412 --rc geninfo_unexecuted_blocks=1 00:17:22.412 00:17:22.412 ' 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:22.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.412 --rc genhtml_branch_coverage=1 00:17:22.412 --rc genhtml_function_coverage=1 00:17:22.412 --rc genhtml_legend=1 00:17:22.412 --rc geninfo_all_blocks=1 00:17:22.412 --rc geninfo_unexecuted_blocks=1 00:17:22.412 00:17:22.412 ' 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:22.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.412 --rc genhtml_branch_coverage=1 00:17:22.412 --rc genhtml_function_coverage=1 00:17:22.412 --rc genhtml_legend=1 00:17:22.412 --rc geninfo_all_blocks=1 00:17:22.412 --rc geninfo_unexecuted_blocks=1 00:17:22.412 00:17:22.412 ' 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:22.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.412 --rc genhtml_branch_coverage=1 00:17:22.412 --rc genhtml_function_coverage=1 00:17:22.412 --rc genhtml_legend=1 00:17:22.412 --rc geninfo_all_blocks=1 00:17:22.412 --rc geninfo_unexecuted_blocks=1 00:17:22.412 00:17:22.412 ' 00:17:22.412 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
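The cmp_versions trace above (here checking that lcov 1.15 is older than 2) walks the dot-separated components of each version in lockstep. Below is a condensed re-implementation of that walk, simplified from what the trace shows (assumption: missing components default to 0, and the separator set ".-:" matches the IFS seen in the trace); it is a sketch, not the exact scripts/common.sh code.

    cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
        local IFS=.-:                      # split on the same separators as the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # first differing component decides; the operator must agree with it
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *=* ]]                   # all components equal: only >=, <=, == hold
    }
    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x, keep legacy LCOV_OPTS"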
00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:22.413 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:22.413 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:17:22.673 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:17:22.673 Error setting digest 00:17:22.673 4012F212587F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:22.673 4012F212587F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:22.673 
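The failed "openssl md5 /dev/fd/62" above is deliberate: with OPENSSL_CONF pointing at the generated spdk_fips.conf, the FIPS provider must refuse MD5, so the "Error setting digest" output and es=1 are the passing outcome. A standalone sketch of the same probe (assumption: spdk_fips.conf already exists from the build_openssl_config step traced above):

    export OPENSSL_CONF=spdk_fips.conf     # config that loads the fips provider
    if echo -n probe | openssl md5 >/dev/null 2>&1; then
        echo "MD5 was accepted: FIPS provider is not enforcing" >&2
        exit 1
    fi
    echo "MD5 rejected as expected: running under the FIPS provider"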
15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:22.673 Cannot find device "nvmf_init_br" 00:17:22.673 15:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:22.673 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:22.673 Cannot find device "nvmf_init_br2" 00:17:22.674 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:22.674 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:22.674 Cannot find device "nvmf_tgt_br" 00:17:22.674 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:17:22.674 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:22.931 Cannot find device "nvmf_tgt_br2" 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:22.931 Cannot find device "nvmf_init_br" 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:22.931 Cannot find device "nvmf_init_br2" 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:22.931 Cannot find device "nvmf_tgt_br" 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:22.931 Cannot find device "nvmf_tgt_br2" 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:22.931 Cannot find device "nvmf_br" 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:22.931 Cannot find device "nvmf_init_if" 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:22.931 Cannot find device "nvmf_init_if2" 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:22.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:22.931 15:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:22.931 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:23.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:23.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.138 ms 00:17:23.191 00:17:23.191 --- 10.0.0.3 ping statistics --- 00:17:23.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.191 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:23.191 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:23.191 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:17:23.191 00:17:23.191 --- 10.0.0.4 ping statistics --- 00:17:23.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.191 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:23.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:17:23.191 00:17:23.191 --- 10.0.0.1 ping statistics --- 00:17:23.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.191 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:23.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:23.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:17:23.191 00:17:23.191 --- 10.0.0.2 ping statistics --- 00:17:23.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.191 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.191 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=85387 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 85387 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85387 ']' 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.192 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:23.451 [2024-11-20 15:25:09.671670] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
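The four pings above close out nvmf_veth_init: the host reaches the namespaced target addresses (10.0.0.3/10.0.0.4) and the namespace reaches the initiator addresses (10.0.0.1/10.0.0.2). Every command below appears verbatim in the trace; the sketch condenses the topology to one initiator pair and one target pair (the trace builds two of each, plus the 4420 rule for nvmf_init_if2) so the shape is easier to see.

    ip netns add nvmf_tgt_ns_spdk                               # target-side namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end into the ns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge the peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                          # host -> namespaced target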
00:17:23.451 [2024-11-20 15:25:09.671799] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.451 [2024-11-20 15:25:09.832020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.451 [2024-11-20 15:25:09.912356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.451 [2024-11-20 15:25:09.912427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.451 [2024-11-20 15:25:09.912438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.451 [2024-11-20 15:25:09.912446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.451 [2024-11-20 15:25:09.912454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:23.451 [2024-11-20 15:25:09.912858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.LSK 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.LSK 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.LSK 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.LSK 00:17:24.393 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.653 [2024-11-20 15:25:10.886108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.653 [2024-11-20 15:25:10.902074] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:24.653 [2024-11-20 15:25:10.902374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:24.653 malloc0 00:17:24.653 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.653 15:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85441 00:17:24.653 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:24.653 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85441 /var/tmp/bdevperf.sock 00:17:24.653 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85441 ']' 00:17:24.653 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.653 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.653 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:24.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.653 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.653 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:24.653 [2024-11-20 15:25:11.060569] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:17:24.653 [2024-11-20 15:25:11.060663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85441 ] 00:17:24.912 [2024-11-20 15:25:11.196021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.912 [2024-11-20 15:25:11.279967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.850 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.851 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:25.851 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.LSK 00:17:25.851 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:26.110 [2024-11-20 15:25:12.428593] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:26.110 TLSTESTn1 00:17:26.110 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:26.369 Running I/O for 10 seconds... 
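Before the 10-second run starts, the PSK plumbing happens in two RPC calls against the bdevperf socket, mirroring what fips.sh traced above. Collected into one place for reference (the key material, socket, paths, and arguments are all taken verbatim from the trace; only the ordering comments are added):

    # Write the TLS PSK in the NVMe interchange format and lock down permissions
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > /tmp/spdk-psk.LSK
    chmod 0600 /tmp/spdk-psk.LSK
    # Register it in the bdevperf app's keyring under the name "key0" ...
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.LSK
    # ... attach to the namespaced target over TLS, referencing the key by name ...
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # ... and kick off the configured verify workload (TLSTESTn1 below)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests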
00:17:28.240 5549.00 IOPS, 21.68 MiB/s [2024-11-20T15:25:15.658Z] 5405.00 IOPS, 21.11 MiB/s [2024-11-20T15:25:17.037Z] 5427.33 IOPS, 21.20 MiB/s [2024-11-20T15:25:17.972Z] 5463.75 IOPS, 21.34 MiB/s [2024-11-20T15:25:18.910Z] 5500.00 IOPS, 21.48 MiB/s [2024-11-20T15:25:19.848Z] 5522.67 IOPS, 21.57 MiB/s [2024-11-20T15:25:20.786Z] 5546.43 IOPS, 21.67 MiB/s [2024-11-20T15:25:21.723Z] 5544.38 IOPS, 21.66 MiB/s [2024-11-20T15:25:22.658Z] 5540.89 IOPS, 21.64 MiB/s [2024-11-20T15:25:22.658Z] 5543.40 IOPS, 21.65 MiB/s 00:17:36.176 Latency(us) 00:17:36.176 [2024-11-20T15:25:22.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.176 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:36.176 Verification LBA range: start 0x0 length 0x2000 00:17:36.176 TLSTESTn1 : 10.01 5549.57 21.68 0.00 0.00 23029.39 4184.83 24214.10 00:17:36.176 [2024-11-20T15:25:22.658Z] =================================================================================================================== 00:17:36.176 [2024-11-20T15:25:22.658Z] Total : 5549.57 21.68 0.00 0.00 23029.39 4184.83 24214.10 00:17:36.176 { 00:17:36.176 "results": [ 00:17:36.176 { 00:17:36.176 "job": "TLSTESTn1", 00:17:36.176 "core_mask": "0x4", 00:17:36.176 "workload": "verify", 00:17:36.176 "status": "finished", 00:17:36.176 "verify_range": { 00:17:36.176 "start": 0, 00:17:36.176 "length": 8192 00:17:36.176 }, 00:17:36.176 "queue_depth": 128, 00:17:36.176 "io_size": 4096, 00:17:36.176 "runtime": 10.0114, 00:17:36.176 "iops": 5549.573486225702, 00:17:36.176 "mibps": 21.67802143056915, 00:17:36.176 "io_failed": 0, 00:17:36.176 "io_timeout": 0, 00:17:36.176 "avg_latency_us": 23029.390347321358, 00:17:36.176 "min_latency_us": 4184.828915662651, 00:17:36.176 "max_latency_us": 24214.104417670682 00:17:36.176 } 00:17:36.176 ], 00:17:36.176 "core_count": 1 00:17:36.176 } 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:36.434 nvmf_trace.0 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85441 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85441 ']' 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
85441 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85441 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:36.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:36.435 killing process with pid 85441 00:17:36.435 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85441' 00:17:36.435 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85441 00:17:36.435 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.435 00:17:36.435 Latency(us) 00:17:36.435 [2024-11-20T15:25:22.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.435 [2024-11-20T15:25:22.917Z] =================================================================================================================== 00:17:36.435 [2024-11-20T15:25:22.917Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.435 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85441 00:17:36.697 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:36.697 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:36.697 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:36.697 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.697 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:36.697 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.697 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.697 rmmod nvme_tcp 00:17:36.697 rmmod nvme_fabrics 00:17:36.959 rmmod nvme_keyring 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 85387 ']' 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 85387 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85387 ']' 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 85387 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85387 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85387' 00:17:36.959 killing process with pid 85387 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85387 00:17:36.959 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85387 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:37.218 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:37.476 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:37.476 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:37.476 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:37.476 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:37.476 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:37.476 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.476 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.476 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.476 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:17:37.476 15:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.LSK 00:17:37.477 00:17:37.477 real 0m15.290s 00:17:37.477 user 0m19.923s 00:17:37.477 sys 0m6.760s 00:17:37.477 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.477 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:37.477 ************************************ 00:17:37.477 END TEST nvmf_fips 00:17:37.477 ************************************ 00:17:37.477 15:25:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:37.477 15:25:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.477 15:25:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.477 15:25:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.477 ************************************ 00:17:37.477 START TEST nvmf_control_msg_list 00:17:37.477 ************************************ 00:17:37.477 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:37.735 * Looking for test storage... 00:17:37.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:37.735 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:37.735 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:17:37.735 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:37.735 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:37.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.736 --rc genhtml_branch_coverage=1 00:17:37.736 --rc genhtml_function_coverage=1 00:17:37.736 --rc genhtml_legend=1 00:17:37.736 --rc geninfo_all_blocks=1 00:17:37.736 --rc geninfo_unexecuted_blocks=1 00:17:37.736 00:17:37.736 ' 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:37.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.736 --rc genhtml_branch_coverage=1 00:17:37.736 --rc genhtml_function_coverage=1 00:17:37.736 --rc genhtml_legend=1 00:17:37.736 --rc geninfo_all_blocks=1 00:17:37.736 --rc geninfo_unexecuted_blocks=1 00:17:37.736 00:17:37.736 ' 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:37.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.736 --rc genhtml_branch_coverage=1 00:17:37.736 --rc genhtml_function_coverage=1 00:17:37.736 --rc genhtml_legend=1 00:17:37.736 --rc geninfo_all_blocks=1 00:17:37.736 --rc geninfo_unexecuted_blocks=1 00:17:37.736 00:17:37.736 ' 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:37.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.736 --rc genhtml_branch_coverage=1 00:17:37.736 --rc genhtml_function_coverage=1 00:17:37.736 --rc genhtml_legend=1 00:17:37.736 --rc geninfo_all_blocks=1 00:17:37.736 --rc 
geninfo_unexecuted_blocks=1 00:17:37.736 00:17:37.736 ' 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.736 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.736 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.737 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.737 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.737 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.737 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.737 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.737 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:37.995 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:37.996 Cannot find device "nvmf_init_br" 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:37.996 Cannot find device "nvmf_init_br2" 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:37.996 Cannot find device "nvmf_tgt_br" 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.996 Cannot find device "nvmf_tgt_br2" 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:37.996 Cannot find device "nvmf_init_br" 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:37.996 Cannot find device "nvmf_init_br2" 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:37.996 Cannot find device "nvmf_tgt_br" 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:37.996 Cannot find device "nvmf_tgt_br2" 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:37.996 Cannot find device "nvmf_br" 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:37.996 Cannot find 
device "nvmf_init_if" 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:37.996 Cannot find device "nvmf_init_if2" 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:37.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:37.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:37.996 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:38.255 15:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:38.255 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:38.255 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:17:38.255 00:17:38.255 --- 10.0.0.3 ping statistics --- 00:17:38.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.255 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:38.255 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:38.255 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:17:38.255 00:17:38.255 --- 10.0.0.4 ping statistics --- 00:17:38.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.255 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:38.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:38.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:38.255 00:17:38.255 --- 10.0.0.1 ping statistics --- 00:17:38.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.255 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:38.255 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:38.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:17:38.255 00:17:38.255 --- 10.0.0.2 ping statistics --- 00:17:38.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.255 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:38.256 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.256 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:17:38.256 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:38.256 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.256 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:38.256 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:38.256 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.256 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:38.256 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:38.514 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:38.514 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:38.514 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.514 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:38.514 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=85855 00:17:38.515 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:38.515 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 85855 00:17:38.515 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 85855 ']' 00:17:38.515 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.515 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.515 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:38.515 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.515 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:38.515 [2024-11-20 15:25:24.804525] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:17:38.515 [2024-11-20 15:25:24.804628] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.515 [2024-11-20 15:25:24.958396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.773 [2024-11-20 15:25:25.039518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.773 [2024-11-20 15:25:25.039582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.773 [2024-11-20 15:25:25.039593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.773 [2024-11-20 15:25:25.039601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.773 [2024-11-20 15:25:25.039609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.773 [2024-11-20 15:25:25.040004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:17:39.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:39.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:39.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:39.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:39.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:39.600 [2024-11-20 15:25:25.824276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:39.600 Malloc0 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:39.600 [2024-11-20 15:25:25.875211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85905 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85906 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85907 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:39.600 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85905 00:17:39.600 [2024-11-20 15:25:26.079594] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:17:39.600 [2024-11-20 15:25:26.079847] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:39.600 [2024-11-20 15:25:26.080002] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:40.977 Initializing NVMe Controllers 00:17:40.977 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:40.977 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:40.977 Initialization complete. Launching workers. 00:17:40.977 ======================================================== 00:17:40.977 Latency(us) 00:17:40.977 Device Information : IOPS MiB/s Average min max 00:17:40.977 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4809.00 18.79 207.71 117.11 846.90 00:17:40.977 ======================================================== 00:17:40.977 Total : 4809.00 18.79 207.71 117.11 846.90 00:17:40.977 00:17:40.977 Initializing NVMe Controllers 00:17:40.977 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:40.977 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:40.977 Initialization complete. Launching workers. 00:17:40.977 ======================================================== 00:17:40.977 Latency(us) 00:17:40.977 Device Information : IOPS MiB/s Average min max 00:17:40.977 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4799.00 18.75 208.14 139.87 610.13 00:17:40.977 ======================================================== 00:17:40.977 Total : 4799.00 18.75 208.14 139.87 610.13 00:17:40.977 00:17:40.977 Initializing NVMe Controllers 00:17:40.977 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:40.977 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:40.977 Initialization complete. Launching workers. 
00:17:40.977 ======================================================== 00:17:40.977 Latency(us) 00:17:40.977 Device Information : IOPS MiB/s Average min max 00:17:40.977 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4803.00 18.76 207.97 139.43 409.03 00:17:40.977 ======================================================== 00:17:40.977 Total : 4803.00 18.76 207.97 139.43 409.03 00:17:40.977 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85906 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85907 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.977 rmmod nvme_tcp 00:17:40.977 rmmod nvme_fabrics 00:17:40.977 rmmod nvme_keyring 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 85855 ']' 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 85855 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 85855 ']' 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 85855 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85855 00:17:40.977 killing process with pid 85855 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85855' 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 85855 00:17:40.977 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 85855 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:41.236 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:17:41.496 00:17:41.496 real 0m3.968s 00:17:41.496 user 0m5.434s 00:17:41.496 
sys 0m1.979s 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.496 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:41.496 ************************************ 00:17:41.496 END TEST nvmf_control_msg_list 00:17:41.496 ************************************ 00:17:41.756 15:25:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:41.756 15:25:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:41.756 15:25:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.756 15:25:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:41.756 ************************************ 00:17:41.756 START TEST nvmf_wait_for_buf 00:17:41.756 ************************************ 00:17:41.756 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:41.756 * Looking for test storage... 00:17:41.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:41.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.756 --rc genhtml_branch_coverage=1 00:17:41.756 --rc genhtml_function_coverage=1 00:17:41.756 --rc genhtml_legend=1 00:17:41.756 --rc geninfo_all_blocks=1 00:17:41.756 --rc geninfo_unexecuted_blocks=1 00:17:41.756 00:17:41.756 ' 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:41.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.756 --rc genhtml_branch_coverage=1 00:17:41.756 --rc genhtml_function_coverage=1 00:17:41.756 --rc genhtml_legend=1 00:17:41.756 --rc geninfo_all_blocks=1 00:17:41.756 --rc geninfo_unexecuted_blocks=1 00:17:41.756 00:17:41.756 ' 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:41.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.756 --rc genhtml_branch_coverage=1 00:17:41.756 --rc genhtml_function_coverage=1 00:17:41.756 --rc genhtml_legend=1 00:17:41.756 --rc geninfo_all_blocks=1 00:17:41.756 --rc geninfo_unexecuted_blocks=1 00:17:41.756 00:17:41.756 ' 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:41.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.756 --rc genhtml_branch_coverage=1 00:17:41.756 --rc genhtml_function_coverage=1 00:17:41.756 --rc genhtml_legend=1 00:17:41.756 --rc geninfo_all_blocks=1 00:17:41.756 --rc geninfo_unexecuted_blocks=1 00:17:41.756 00:17:41.756 ' 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:41.756 15:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.756 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.757 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.757 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.757 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.757 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.016 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:42.017 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
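A note on what nvmftestinit is about to do here: with NET_TYPE=virt, prepare_net_devs calls nvmf_veth_init, which puts the target in its own network namespace and connects it to the initiator through veth pairs joined by a bridge. Condensed from the trace that follows (names and 10.0.0.x addresses are the NVMF_* values visible in this log; error handling and the second if2/br2 pair are omitted), the topology amounts to roughly:

    # minimal sketch of the virtual test network nvmf_veth_init builds
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # joins the two halves
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The companion ipts/iptr helpers tag every firewall rule they add with an SPDK_NVMF comment (visible in the iptables invocations further down), which is why teardown can simply replay iptables-save | grep -v SPDK_NVMF | iptables-restore to drop exactly the rules this test added and nothing else.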
00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:42.017 Cannot find device "nvmf_init_br" 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:42.017 Cannot find device "nvmf_init_br2" 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:42.017 Cannot find device "nvmf_tgt_br" 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.017 Cannot find device "nvmf_tgt_br2" 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:42.017 Cannot find device "nvmf_init_br" 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:42.017 Cannot find device "nvmf_init_br2" 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:42.017 Cannot find device "nvmf_tgt_br" 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:42.017 Cannot find device "nvmf_tgt_br2" 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:42.017 Cannot find device "nvmf_br" 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:42.017 Cannot find device "nvmf_init_if" 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:42.017 Cannot find device "nvmf_init_if2" 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:17:42.017 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.285 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:42.285 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:42.544 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:42.544 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:42.544 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:42.544 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:42.544 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:42.544 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:42.544 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:42.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:42.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:17:42.545 00:17:42.545 --- 10.0.0.3 ping statistics --- 00:17:42.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.545 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:42.545 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:42.545 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.125 ms 00:17:42.545 00:17:42.545 --- 10.0.0.4 ping statistics --- 00:17:42.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.545 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:42.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:17:42.545 00:17:42.545 --- 10.0.0.1 ping statistics --- 00:17:42.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.545 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:42.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:42.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:17:42.545 00:17:42.545 --- 10.0.0.2 ping statistics --- 00:17:42.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.545 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=86149 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 86149 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 86149 ']' 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.545 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.545 [2024-11-20 15:25:28.942303] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
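With connectivity proven by the four pings above, the nvmfappstart/waitforlisten pair launches the target inside the namespace and blocks until its RPC socket answers. The exact helper logic lives in autotest_common.sh; in sketch form it is essentially a poll loop against /var/tmp/spdk.sock (the rpc.py path below is assumed from the repo layout shown in this log):

    # start the app inside the namespace, held in pre-init state by --wait-for-rpc
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # poll until the RPC server responds before issuing configuration RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.1
    done

--wait-for-rpc matters for this particular test: the whole point of wait_for_buf is to shrink the iobuf pool via iobuf_set_options before framework_start_init runs, which is exactly the RPC sequence traced below.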
00:17:42.545 [2024-11-20 15:25:28.942412] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.804 [2024-11-20 15:25:29.097182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.804 [2024-11-20 15:25:29.178185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.804 [2024-11-20 15:25:29.178271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.804 [2024-11-20 15:25:29.178282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.804 [2024-11-20 15:25:29.178291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.804 [2024-11-20 15:25:29.178298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.804 [2024-11-20 15:25:29.178704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:43.743 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.743 15:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:43.743 Malloc0 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:43.743 [2024-11-20 15:25:30.119343] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:43.743 [2024-11-20 15:25:30.159452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.743 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:44.002 [2024-11-20 15:25:30.359827] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:17:45.383 Initializing NVMe Controllers 00:17:45.383 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:45.383 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:45.383 Initialization complete. Launching workers. 00:17:45.383 ======================================================== 00:17:45.383 Latency(us) 00:17:45.383 Device Information : IOPS MiB/s Average min max 00:17:45.383 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.54 15.94 32488.13 8033.08 66094.96 00:17:45.383 ======================================================== 00:17:45.383 Total : 127.54 15.94 32488.13 8033.08 66094.96 00:17:45.383 00:17:45.383 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:45.383 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:45.383 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.383 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:45.383 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.383 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:17:45.383 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:17:45.383 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:45.383 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:45.383 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:45.383 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:45.384 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:45.384 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:45.384 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:45.384 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:45.384 rmmod nvme_tcp 00:17:45.384 rmmod nvme_fabrics 00:17:45.384 rmmod nvme_keyring 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 86149 ']' 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 86149 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 86149 ']' 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 86149 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
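The retry_count check recorded a few entries back is the test's entire pass condition. The target was started with a deliberately tiny iobuf small pool (iobuf_set_options --small-pool-count 154) and a transport allowed only 24 shared buffers (nvmf_create_transport -n 24 -b 24), so four in-flight 128 KiB reads must exhaust the pool and push allocations onto the iobuf wait queue. A nonzero small_pool.retry count proves that wait path actually ran. In sketch form (the rpc.py invocation stands in for the suite's rpc_cmd wrapper; the jq filter is verbatim from the trace):

    retry_count=$(rpc.py -s /var/tmp/spdk.sock iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ $retry_count -eq 0 ]]; then
        echo "iobuf small pool was never exhausted; wait-for-buf path not exercised" >&2
        exit 1
    fi

Here the log recorded retry_count=2022, so the test passed and teardown proceeds.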
00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86149 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.643 killing process with pid 86149 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86149' 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 86149 00:17:45.643 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 86149 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:45.903 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:46.162 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.162 15:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.162 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:46.162 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.162 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.162 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.162 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:17:46.162 00:17:46.162 real 0m4.503s 00:17:46.162 user 0m3.652s 00:17:46.162 sys 0m1.208s 00:17:46.162 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.163 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:46.163 ************************************ 00:17:46.163 END TEST nvmf_wait_for_buf 00:17:46.163 ************************************ 00:17:46.163 15:25:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:46.163 15:25:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:46.163 15:25:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:46.163 15:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:46.163 15:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.163 15:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:46.163 ************************************ 00:17:46.163 START TEST nvmf_nsid 00:17:46.163 ************************************ 00:17:46.163 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:46.423 * Looking for test storage... 
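Every case in this suite, including the nvmf_nsid run just launched, is driven by the same run_test wrapper, which produces the START TEST/END TEST banners and the real/user/sys timing seen above. A simplified sketch of its shape (the real helper in autotest_common.sh also validates its arguments and manages xtrace state):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # the test script itself, e.g. .../test/nvmf/target/nsid.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }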
00:17:46.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:46.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.423 --rc genhtml_branch_coverage=1 00:17:46.423 --rc genhtml_function_coverage=1 00:17:46.423 --rc genhtml_legend=1 00:17:46.423 --rc geninfo_all_blocks=1 00:17:46.423 --rc geninfo_unexecuted_blocks=1 00:17:46.423 00:17:46.423 ' 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:46.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.423 --rc genhtml_branch_coverage=1 00:17:46.423 --rc genhtml_function_coverage=1 00:17:46.423 --rc genhtml_legend=1 00:17:46.423 --rc geninfo_all_blocks=1 00:17:46.423 --rc geninfo_unexecuted_blocks=1 00:17:46.423 00:17:46.423 ' 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:46.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.423 --rc genhtml_branch_coverage=1 00:17:46.423 --rc genhtml_function_coverage=1 00:17:46.423 --rc genhtml_legend=1 00:17:46.423 --rc geninfo_all_blocks=1 00:17:46.423 --rc geninfo_unexecuted_blocks=1 00:17:46.423 00:17:46.423 ' 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:46.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.423 --rc genhtml_branch_coverage=1 00:17:46.423 --rc genhtml_function_coverage=1 00:17:46.423 --rc genhtml_legend=1 00:17:46.423 --rc geninfo_all_blocks=1 00:17:46.423 --rc geninfo_unexecuted_blocks=1 00:17:46.423 00:17:46.423 ' 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
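Sourcing nvmf/common.sh again re-derives all the NVMF_* defaults along with the initiator's identity. nvme gen-hostnqn typically builds the NQN from the host's DMI system UUID (falling back to a random one), which is why the same uuid shows up for every test in this run. One plausible form of the derivation (the real assignments live in common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<system uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # passed to nvme connect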
00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:17:46.423 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:46.424 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:46.424 Cannot find device "nvmf_init_br" 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:46.424 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:46.684 Cannot find device "nvmf_init_br2" 00:17:46.684 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:46.684 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:46.684 Cannot find device "nvmf_tgt_br" 00:17:46.684 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:46.684 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.684 Cannot find device "nvmf_tgt_br2" 00:17:46.684 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:46.684 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:46.684 Cannot find device "nvmf_init_br" 00:17:46.684 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:46.684 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:46.684 Cannot find device "nvmf_init_br2" 00:17:46.684 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:46.684 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:46.684 Cannot find device "nvmf_tgt_br" 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:46.684 Cannot find device "nvmf_tgt_br2" 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:46.684 Cannot find device "nvmf_br" 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:46.684 Cannot find device "nvmf_init_if" 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:46.684 Cannot find device "nvmf_init_if2" 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:46.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:46.684 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
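The nvmf_veth_init sequence traced above reduces to the following standalone sketch (namespace, interface names, and addresses taken verbatim from the log; only the first initiator/target pair is shown):

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the host-side peers so 10.0.0.1 can reach 10.0.0.3
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br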
00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:46.944 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:46.944 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.154 ms 00:17:46.944 00:17:46.944 --- 10.0.0.3 ping statistics --- 00:17:46.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.944 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:46.944 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:46.944 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:17:46.944 00:17:46.944 --- 10.0.0.4 ping statistics --- 00:17:46.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.944 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:46.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:17:46.944 00:17:46.944 --- 10.0.0.1 ping statistics --- 00:17:46.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.944 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:46.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:46.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:17:46.944 00:17:46.944 --- 10.0.0.2 ping statistics --- 00:17:46.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.944 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=86439 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 86439 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 86439 ']' 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.944 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.945 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.945 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:47.205 [2024-11-20 15:25:33.474931] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
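With the namespace reachable (all four pings succeed), the trace loads the host-side driver and launches the target inside the namespace; a minimal sketch of those two steps, with the binary path and flags as they appear above:

    modprobe nvme-tcp   # kernel NVMe/TCP initiator, needed later for 'nvme connect'
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!          # the log records this as nvmfpid=86439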
00:17:47.205 [2024-11-20 15:25:33.475028] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.205 [2024-11-20 15:25:33.630327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.464 [2024-11-20 15:25:33.711785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.464 [2024-11-20 15:25:33.711856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.464 [2024-11-20 15:25:33.711866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.465 [2024-11-20 15:25:33.711875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.465 [2024-11-20 15:25:33.711882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.465 [2024-11-20 15:25:33.712292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=86482 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
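The get_main_ns_ip helper traced above picks a variable name per transport and then dereferences it; a simplified sketch (the indirect expansion is an assumption, since the trace only shows the tcp branch resolving to 10.0.0.1):

    get_main_ns_ip() {
        local -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
        local ip=${ip_candidates[tcp]}    # holds a variable *name*, here NVMF_INITIATOR_IP
        [[ -n ${!ip} ]] && echo "${!ip}"  # indirect expansion -> 10.0.0.1 in this run
    }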
00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=f0be1126-f826-4eeb-a61b-8464a6843357 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=63ff710f-e152-4898-814b-9f26827dafe6 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=3a22cacc-9b3b-4370-832b-9d982a90184c 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.034 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:48.294 null0 00:17:48.294 [2024-11-20 15:25:34.530542] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:17:48.294 [2024-11-20 15:25:34.530632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86482 ] 00:17:48.294 null1 00:17:48.294 null2 00:17:48.294 [2024-11-20 15:25:34.542794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.294 [2024-11-20 15:25:34.566932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:48.294 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.294 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 86482 /var/tmp/tgt2.sock 00:17:48.294 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 86482 ']' 00:17:48.294 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:48.294 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:48.294 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
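waitforlisten, whose locals (rpc_addr=/var/tmp/tgt2.sock, max_retries=100) appear in the trace above, amounts to polling the target's RPC socket until it answers; a sketch under those assumptions (the probe RPC and the sleep interval are not shown in the log):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # give up if the target died
            # any cheap RPC works as a liveness probe once the socket is up
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }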
00:17:48.294 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.294 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:48.294 [2024-11-20 15:25:34.685039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.294 [2024-11-20 15:25:34.768028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.863 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.863 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:48.863 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:49.122 [2024-11-20 15:25:35.507465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.122 [2024-11-20 15:25:35.523711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:49.122 nvme0n1 nvme0n2 00:17:49.122 nvme1n1 00:17:49.122 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:49.122 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:49.122 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:49.445 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:50.379 15:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid f0be1126-f826-4eeb-a61b-8464a6843357 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f0be1126f8264eeba61b8464a6843357 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F0BE1126F8264EEBA61B8464A6843357 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ F0BE1126F8264EEBA61B8464A6843357 == \F\0\B\E\1\1\2\6\F\8\2\6\4\E\E\B\A\6\1\B\8\4\6\4\A\6\8\4\3\3\5\7 ]] 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:50.379 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 63ff710f-e152-4898-814b-9f26827dafe6 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=63ff710fe1524898814b9f26827dafe6 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 63FF710FE1524898814B9F26827DAFE6 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 63FF710FE1524898814B9F26827DAFE6 == \6\3\F\F\7\1\0\F\E\1\5\2\4\8\9\8\8\1\4\B\9\F\2\6\8\2\7\D\A\F\E\6 ]] 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:50.637 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:50.638 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:50.638 15:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:50.638 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:50.638 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:50.638 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 3a22cacc-9b3b-4370-832b-9d982a90184c 00:17:50.638 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:50.638 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:50.638 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:50.638 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:50.638 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:50.638 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3a22cacc9b3b4370832b9d982a90184c 00:17:50.638 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3A22CACC9B3B4370832B9D982A90184C 00:17:50.638 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 3A22CACC9B3B4370832B9D982A90184C == \3\A\2\2\C\A\C\C\9\B\3\B\4\3\7\0\8\3\2\B\9\D\9\8\2\A\9\0\1\8\4\C ]] 00:17:50.638 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 86482 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 86482 ']' 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 86482 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86482 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:50.896 killing process with pid 86482 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86482' 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 86482 00:17:50.896 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 86482 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
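All three NGUID checks above follow the same pattern: strip the dashes from the UUID the namespace was created with, then compare against what the connected controller reports. Condensed from the trace for ns1 (the explicit uppercase step is an assumption; the log only shows both sides already uppercased):

    uuid=f0be1126-f826-4eeb-a61b-8464a6843357          # ns1uuid from the trace
    want=$(tr -d - <<< "$uuid"); want=${want^^}        # NGUID = UUID without dashes
    have=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid); have=${have^^}
    [[ $have == "$want" ]] || echo "NGUID mismatch: $have != $want"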
00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.463 rmmod nvme_tcp 00:17:51.463 rmmod nvme_fabrics 00:17:51.463 rmmod nvme_keyring 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 86439 ']' 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 86439 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 86439 ']' 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 86439 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86439 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.463 killing process with pid 86439 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86439' 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 86439 00:17:51.463 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 86439 00:17:51.722 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:51.722 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:51.722 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:51.722 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:17:51.722 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:17:51.722 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:51.722 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:17:51.722 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.722 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:51.722 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.980 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.239 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:17:52.239 00:17:52.239 real 0m5.921s 00:17:52.239 user 0m8.202s 00:17:52.239 sys 0m2.095s 00:17:52.239 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.239 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:52.239 ************************************ 00:17:52.239 END TEST nvmf_nsid 00:17:52.239 ************************************ 00:17:52.239 15:25:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:52.239 00:17:52.239 real 7m19.797s 00:17:52.239 user 16m52.300s 00:17:52.239 sys 2m1.288s 00:17:52.239 15:25:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.239 15:25:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.239 ************************************ 00:17:52.239 END TEST nvmf_target_extra 00:17:52.239 ************************************ 00:17:52.239 15:25:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:52.239 15:25:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.239 15:25:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.239 15:25:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:52.239 ************************************ 00:17:52.239 START TEST nvmf_host 00:17:52.239 ************************************ 00:17:52.239 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:52.499 * Looking for test storage... 
00:17:52.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:52.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.499 --rc genhtml_branch_coverage=1 00:17:52.499 --rc genhtml_function_coverage=1 00:17:52.499 --rc genhtml_legend=1 00:17:52.499 --rc geninfo_all_blocks=1 00:17:52.499 --rc geninfo_unexecuted_blocks=1 00:17:52.499 00:17:52.499 ' 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:52.499 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:52.499 --rc genhtml_branch_coverage=1 00:17:52.499 --rc genhtml_function_coverage=1 00:17:52.499 --rc genhtml_legend=1 00:17:52.499 --rc geninfo_all_blocks=1 00:17:52.499 --rc geninfo_unexecuted_blocks=1 00:17:52.499 00:17:52.499 ' 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:52.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.499 --rc genhtml_branch_coverage=1 00:17:52.499 --rc genhtml_function_coverage=1 00:17:52.499 --rc genhtml_legend=1 00:17:52.499 --rc geninfo_all_blocks=1 00:17:52.499 --rc geninfo_unexecuted_blocks=1 00:17:52.499 00:17:52.499 ' 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:52.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.499 --rc genhtml_branch_coverage=1 00:17:52.499 --rc genhtml_function_coverage=1 00:17:52.499 --rc genhtml_legend=1 00:17:52.499 --rc geninfo_all_blocks=1 00:17:52.499 --rc geninfo_unexecuted_blocks=1 00:17:52.499 00:17:52.499 ' 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:17:52.499 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.500 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
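The lcov version probe traced above for nvmf_host.sh (and repeated below for multicontroller.sh) is a component-wise comparison: split both version strings on '.', '-' and ':', then compare numerically field by field. A simplified sketch of that lt/cmp_versions logic (the real script also normalizes non-numeric components through its decimal helper):

    lt() {   # true when version $1 < version $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal, so not strictly less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: keep --rc lcov_*_coverage spelling"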
00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.500 ************************************ 00:17:52.500 START TEST nvmf_multicontroller 00:17:52.500 ************************************ 00:17:52.500 15:25:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:52.760 * Looking for test storage... 00:17:52.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:17:52.760 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:52.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.761 --rc genhtml_branch_coverage=1 00:17:52.761 --rc genhtml_function_coverage=1 00:17:52.761 --rc genhtml_legend=1 00:17:52.761 --rc geninfo_all_blocks=1 00:17:52.761 --rc geninfo_unexecuted_blocks=1 00:17:52.761 00:17:52.761 ' 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:52.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.761 --rc genhtml_branch_coverage=1 00:17:52.761 --rc genhtml_function_coverage=1 00:17:52.761 --rc genhtml_legend=1 00:17:52.761 --rc geninfo_all_blocks=1 00:17:52.761 --rc geninfo_unexecuted_blocks=1 00:17:52.761 00:17:52.761 ' 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:52.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.761 --rc genhtml_branch_coverage=1 00:17:52.761 --rc genhtml_function_coverage=1 00:17:52.761 --rc genhtml_legend=1 00:17:52.761 --rc geninfo_all_blocks=1 00:17:52.761 --rc geninfo_unexecuted_blocks=1 00:17:52.761 00:17:52.761 ' 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:52.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.761 --rc genhtml_branch_coverage=1 00:17:52.761 --rc genhtml_function_coverage=1 00:17:52.761 --rc genhtml_legend=1 00:17:52.761 --rc geninfo_all_blocks=1 00:17:52.761 --rc geninfo_unexecuted_blocks=1 00:17:52.761 00:17:52.761 ' 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:17:52.761 15:25:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.761 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.761 15:25:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.761 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:52.762 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:52.762 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:52.762 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:52.762 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:52.762 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:52.762 15:25:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.762 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:52.762 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:52.762 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:52.762 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:52.762 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:52.762 Cannot find device "nvmf_init_br" 00:17:52.762 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:17:52.762 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:53.019 Cannot find device "nvmf_init_br2" 00:17:53.019 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:17:53.019 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:53.019 Cannot find device "nvmf_tgt_br" 00:17:53.019 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:53.020 Cannot find device "nvmf_tgt_br2" 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:53.020 Cannot find device "nvmf_init_br" 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:53.020 Cannot find device "nvmf_init_br2" 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:53.020 Cannot find device "nvmf_tgt_br" 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:53.020 Cannot find device "nvmf_tgt_br2" 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:53.020 Cannot find device "nvmf_br" 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:53.020 Cannot find device "nvmf_init_if" 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:53.020 Cannot find device "nvmf_init_if2" 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:53.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:53.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:53.020 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:53.279 15:25:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:53.279 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:53.279 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.135 ms 00:17:53.279 00:17:53.279 --- 10.0.0.3 ping statistics --- 00:17:53.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.279 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:17:53.279 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:53.538 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:53.538 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:17:53.538 00:17:53.538 --- 10.0.0.4 ping statistics --- 00:17:53.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.538 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:53.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:17:53.538 00:17:53.538 --- 10.0.0.1 ping statistics --- 00:17:53.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.538 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:53.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:53.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:17:53.538 00:17:53.538 --- 10.0.0.2 ping statistics --- 00:17:53.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.538 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=86858 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 86858 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86858 ']' 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.538 15:25:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:53.538 [2024-11-20 15:25:39.885300] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
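For reference, the nvmf_veth_init block traced above amounts to the following dual-path topology; this is a condensed recap of the ip(8) calls already shown, with interface names and addresses taken from the log, not an extra step in the run:

    # Two initiator veths stay on the host; two target veths move into the
    # nvmf_tgt_ns_spdk namespace; all four peer ends join one bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # host, 10.0.0.1/24
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # host, 10.0.0.2/24
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # netns, 10.0.0.3/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # netns, 10.0.0.4/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" master nvmf_br    # one L2 segment for 10.0.0.0/24
    done

The ipts ACCEPT rules and the four pings above confirm reachability in both directions before the nvmf target is started inside the namespace, which is what the SPDK initialization lines around this point show.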
00:17:53.538 [2024-11-20 15:25:39.885401] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.796 [2024-11-20 15:25:40.042008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:53.796 [2024-11-20 15:25:40.125771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.796 [2024-11-20 15:25:40.125842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.796 [2024-11-20 15:25:40.125854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.796 [2024-11-20 15:25:40.125863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.796 [2024-11-20 15:25:40.125870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.796 [2024-11-20 15:25:40.127267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.796 [2024-11-20 15:25:40.128097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.796 [2024-11-20 15:25:40.128099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:54.731 [2024-11-20 15:25:40.918438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:54.731 Malloc0 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.731 15:25:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:54.731 [2024-11-20 15:25:41.002024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:54.731 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.731 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:54.731 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.731 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:54.731 [2024-11-20 15:25:41.013830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:54.731 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.731 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:54.731 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:54.732 Malloc1 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86916 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86916 /var/tmp/bdevperf.sock 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86916 ']' 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
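At this point two JSON-RPC planes exist side by side: the nvmf target created earlier answers on its default socket (/var/tmp/spdk.sock), while bdevperf has just been launched with -z (stay idle until configured and kicked off over RPC) and -r /var/tmp/bdevperf.sock, its private RPC socket; -q 128 -o 4096 -w write -t 1 describe the eventual job (queue depth 128, 4 KiB writes, one second). The rpc_cmd -s /var/tmp/bdevperf.sock calls that follow drive that second socket; assuming the standard scripts/rpc.py client, the first successful attach below is equivalent to:

    # create bdev NVMe0n1 inside bdevperf over its private RPC socket;
    # -i selects the host-side source address (10.0.0.1 or 10.0.0.2) to bind
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1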
00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.732 15:25:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:55.685 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.685 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:17:55.685 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:17:55.685 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.685 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:55.971 NVMe0n1 00:17:55.971 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.971 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:55.971 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:55.971 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.971 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:55.971 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.971 1 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:55.972 2024/11/20 15:25:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:55.972 request: 00:17:55.972 { 00:17:55.972 "method": "bdev_nvme_attach_controller", 00:17:55.972 "params": { 00:17:55.972 "name": "NVMe0", 00:17:55.972 "trtype": "tcp", 00:17:55.972 "traddr": "10.0.0.3", 00:17:55.972 "adrfam": "ipv4", 00:17:55.972 "trsvcid": "4420", 00:17:55.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.972 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:55.972 "hostaddr": "10.0.0.1", 00:17:55.972 "prchk_reftag": false, 00:17:55.972 "prchk_guard": false, 00:17:55.972 "hdgst": false, 00:17:55.972 "ddgst": false, 00:17:55.972 "allow_unrecognized_csi": false 00:17:55.972 } 00:17:55.972 } 00:17:55.972 Got JSON-RPC error response 00:17:55.972 GoRPCClient: error on JSON-RPC call 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:55.972 2024/11/20 15:25:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:55.972 request: 00:17:55.972 { 00:17:55.972 "method": "bdev_nvme_attach_controller", 00:17:55.972 "params": { 00:17:55.972 "name": "NVMe0", 00:17:55.972 "trtype": "tcp", 00:17:55.972 "traddr": "10.0.0.3", 00:17:55.972 "adrfam": "ipv4", 00:17:55.972 "trsvcid": "4420", 00:17:55.972 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:55.972 "hostaddr": "10.0.0.1", 00:17:55.972 "prchk_reftag": false, 00:17:55.972 "prchk_guard": false, 00:17:55.972 "hdgst": false, 00:17:55.972 "ddgst": false, 00:17:55.972 "allow_unrecognized_csi": false 00:17:55.972 } 00:17:55.972 } 00:17:55.972 Got JSON-RPC error response 00:17:55.972 GoRPCClient: error on JSON-RPC call 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:55.972 2024/11/20 15:25:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:17:55.972 request: 00:17:55.972 { 00:17:55.972 
"method": "bdev_nvme_attach_controller", 00:17:55.972 "params": { 00:17:55.972 "name": "NVMe0", 00:17:55.972 "trtype": "tcp", 00:17:55.972 "traddr": "10.0.0.3", 00:17:55.972 "adrfam": "ipv4", 00:17:55.972 "trsvcid": "4420", 00:17:55.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.972 "hostaddr": "10.0.0.1", 00:17:55.972 "prchk_reftag": false, 00:17:55.972 "prchk_guard": false, 00:17:55.972 "hdgst": false, 00:17:55.972 "ddgst": false, 00:17:55.972 "multipath": "disable", 00:17:55.972 "allow_unrecognized_csi": false 00:17:55.972 } 00:17:55.972 } 00:17:55.972 Got JSON-RPC error response 00:17:55.972 GoRPCClient: error on JSON-RPC call 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.972 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:55.972 2024/11/20 15:25:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:55.972 request: 00:17:55.972 { 00:17:55.972 "method": "bdev_nvme_attach_controller", 00:17:55.972 "params": { 00:17:55.972 "name": "NVMe0", 00:17:55.972 "trtype": "tcp", 00:17:55.973 "traddr": 
"10.0.0.3", 00:17:55.973 "adrfam": "ipv4", 00:17:55.973 "trsvcid": "4420", 00:17:55.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.973 "hostaddr": "10.0.0.1", 00:17:55.973 "prchk_reftag": false, 00:17:55.973 "prchk_guard": false, 00:17:55.973 "hdgst": false, 00:17:55.973 "ddgst": false, 00:17:55.973 "multipath": "failover", 00:17:55.973 "allow_unrecognized_csi": false 00:17:55.973 } 00:17:55.973 } 00:17:55.973 Got JSON-RPC error response 00:17:55.973 GoRPCClient: error on JSON-RPC call 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:55.973 NVMe0n1 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.973 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:56.232 00:17:56.232 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.232 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:56.232 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.232 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:56.232 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:56.232 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.232 15:25:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:56.232 15:25:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:57.168 {
00:17:57.168 "results": [
00:17:57.168 {
00:17:57.168 "job": "NVMe0n1",
00:17:57.168 "core_mask": "0x1",
00:17:57.168 "workload": "write",
00:17:57.168 "status": "finished",
00:17:57.168 "queue_depth": 128,
00:17:57.168 "io_size": 4096,
00:17:57.168 "runtime": 1.008078,
00:17:57.168 "iops": 23817.601415763464,
00:17:57.168 "mibps": 93.03750553032603,
00:17:57.168 "io_failed": 0,
00:17:57.168 "io_timeout": 0,
00:17:57.168 "avg_latency_us": 5366.468759569724,
00:17:57.168 "min_latency_us": 2579.328514056225,
00:17:57.168 "max_latency_us": 11212.183132530121
00:17:57.168 }
00:17:57.168 ],
00:17:57.168 "core_count": 1
00:17:57.168 }
00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:57.426 nvme1n1 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2
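The sequence above pins down how bdev_nvme_attach_controller treats an existing controller name in this run: reusing NVMe0 with a different hostnqn, with a different subsystem NQN, with -x disable, or with -x failover over the identical traddr/trsvcid is rejected with Code=-114, while attaching the same subsystem's second listener (port 4421) under the same name is accepted, after which perform_tests drives I/O over the resulting NVMe0n1 bdev. A minimal sketch of the accepted pair, with the same socket and NQNs as the trace and scripts/rpc.py assumed as the client:

    # first path creates controller NVMe0
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # a second portal of the same subsystem may join under the same name
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1

The nvme1 attach/detach cycle the trace is in the middle of here checks the complementary host side: nvmf_subsystem_get_qpairs piped through jq must report a peer_address.traddr equal to whichever -i source address (10.0.0.1, then 10.0.0.2) the controller was attached with.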
00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:57.426 nvme1n1 00:17:57.426 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.685 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:17:57.685 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:17:57.685 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.685 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:57.685 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.685 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:17:57.685 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 86916 00:17:57.685 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86916 ']' 00:17:57.685 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86916 00:17:57.685 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:17:57.685 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.685 15:25:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86916 00:17:57.685 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.685 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.685 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86916' 00:17:57.685 killing process with pid 86916 00:17:57.685 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86916 00:17:57.685 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86916 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:17:57.944 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:57.944 [2024-11-20 15:25:41.158511] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:17:57.944 [2024-11-20 15:25:41.158637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86916 ] 00:17:57.944 [2024-11-20 15:25:41.310797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.944 [2024-11-20 15:25:41.393393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.944 [2024-11-20 15:25:42.482311] bdev.c:4906:bdev_name_add: *ERROR*: Bdev name cc17ce98-6cf0-4eec-be72-5dd036aa1dd2 already exists 00:17:57.944 [2024-11-20 15:25:42.482409] bdev.c:8106:bdev_register: *ERROR*: Unable to add uuid:cc17ce98-6cf0-4eec-be72-5dd036aa1dd2 alias for bdev NVMe1n1 00:17:57.944 [2024-11-20 15:25:42.482429] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:57.944 Running I/O for 1 seconds... 
00:17:57.944 23755.00 IOPS, 92.79 MiB/s
00:17:57.944 Latency(us)
00:17:57.944 [2024-11-20T15:25:44.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:57.944 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:17:57.944 NVMe0n1 : 1.01 23817.60 93.04 0.00 0.00 5366.47 2579.33 11212.18
00:17:57.944 [2024-11-20T15:25:44.426Z] ===================================================================================================================
00:17:57.944 [2024-11-20T15:25:44.426Z] Total : 23817.60 93.04 0.00 0.00 5366.47 2579.33 11212.18
00:17:57.944 Received shutdown signal, test time was about 1.000000 seconds
00:17:57.944
00:17:57.944 Latency(us)
00:17:57.944 [2024-11-20T15:25:44.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:57.944 [2024-11-20T15:25:44.426Z] ===================================================================================================================
00:17:57.944 [2024-11-20T15:25:44.426Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:57.944 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:57.944 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:57.944 rmmod nvme_tcp 00:17:57.944 rmmod nvme_fabrics 00:17:57.944 rmmod nvme_keyring 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 86858 ']' 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 86858 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86858 ']' 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86858 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86858 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller --
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:58.203 killing process with pid 86858 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86858' 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86858 00:17:58.203 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86858 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:58.461 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.719 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.719 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:58.719 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.719 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.719 15:25:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:17:58.719 15:25:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:17:58.719 00:17:58.719 real 0m6.098s 00:17:58.719 user 0m17.353s 00:17:58.719 sys 0m1.808s 00:17:58.719 ************************************ 00:17:58.719 END TEST nvmf_multicontroller 00:17:58.719 ************************************ 00:17:58.719 15:25:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.719 15:25:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 15:25:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:58.719 15:25:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:58.719 15:25:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.719 15:25:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 ************************************ 00:17:58.719 START TEST nvmf_aer 00:17:58.719 ************************************ 00:17:58.719 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:58.979 * Looking for test storage... 00:17:58.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:58.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.979 --rc genhtml_branch_coverage=1 00:17:58.979 --rc genhtml_function_coverage=1 00:17:58.979 --rc genhtml_legend=1 00:17:58.979 --rc geninfo_all_blocks=1 00:17:58.979 --rc geninfo_unexecuted_blocks=1 00:17:58.979 00:17:58.979 ' 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:58.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.979 --rc genhtml_branch_coverage=1 00:17:58.979 --rc genhtml_function_coverage=1 00:17:58.979 --rc genhtml_legend=1 00:17:58.979 --rc geninfo_all_blocks=1 00:17:58.979 --rc geninfo_unexecuted_blocks=1 00:17:58.979 00:17:58.979 ' 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:58.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.979 --rc genhtml_branch_coverage=1 00:17:58.979 --rc genhtml_function_coverage=1 00:17:58.979 --rc genhtml_legend=1 00:17:58.979 --rc geninfo_all_blocks=1 00:17:58.979 --rc geninfo_unexecuted_blocks=1 00:17:58.979 00:17:58.979 ' 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:58.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.979 --rc genhtml_branch_coverage=1 00:17:58.979 --rc genhtml_function_coverage=1 00:17:58.979 --rc genhtml_legend=1 00:17:58.979 --rc geninfo_all_blocks=1 00:17:58.979 --rc geninfo_unexecuted_blocks=1 00:17:58.979 00:17:58.979 ' 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.979 
15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:58.979 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:58.979 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:58.980 Cannot find device "nvmf_init_br" 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:58.980 Cannot find device "nvmf_init_br2" 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:58.980 Cannot find device "nvmf_tgt_br" 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:58.980 Cannot find device "nvmf_tgt_br2" 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:17:58.980 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:59.239 Cannot find device "nvmf_init_br" 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:59.239 Cannot find device "nvmf_init_br2" 00:17:59.239 15:25:45 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:59.239 Cannot find device "nvmf_tgt_br" 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:59.239 Cannot find device "nvmf_tgt_br2" 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:59.239 Cannot find device "nvmf_br" 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:59.239 Cannot find device "nvmf_init_if" 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:59.239 Cannot find device "nvmf_init_if2" 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:59.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:59.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:59.239 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:59.497 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:59.497 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:59.497 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:59.497 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:59.497 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:59.497 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:59.497 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:59.497 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:59.497 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:59.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:59.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:17:59.498 00:17:59.498 --- 10.0.0.3 ping statistics --- 00:17:59.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.498 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:59.498 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:59.498 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.116 ms 00:17:59.498 00:17:59.498 --- 10.0.0.4 ping statistics --- 00:17:59.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.498 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:59.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:59.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:17:59.498 00:17:59.498 --- 10.0.0.1 ping statistics --- 00:17:59.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.498 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:59.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:17:59.498 00:17:59.498 --- 10.0.0.2 ping statistics --- 00:17:59.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.498 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=87229 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 87229 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 87229 ']' 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.498 15:25:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:59.498 [2024-11-20 15:25:45.947196] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:17:59.498 [2024-11-20 15:25:45.947299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.756 [2024-11-20 15:25:46.105550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:59.756 [2024-11-20 15:25:46.191475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.756 [2024-11-20 15:25:46.191549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.756 [2024-11-20 15:25:46.191559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.756 [2024-11-20 15:25:46.191568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.756 [2024-11-20 15:25:46.191575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.756 [2024-11-20 15:25:46.192912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.756 [2024-11-20 15:25:46.193052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.756 [2024-11-20 15:25:46.193136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.756 [2024-11-20 15:25:46.193219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:18:00.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:00.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:00.691 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.691 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:00.691 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.691 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:00.691 [2024-11-20 15:25:46.956188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.691 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.691 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:18:00.691 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.691 15:25:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:00.691 Malloc0 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:00.691 [2024-11-20 15:25:47.049142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:00.691 [ 00:18:00.691 { 00:18:00.691 "allow_any_host": true, 00:18:00.691 "hosts": [], 00:18:00.691 "listen_addresses": [], 00:18:00.691 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:00.691 "subtype": "Discovery" 00:18:00.691 }, 00:18:00.691 { 00:18:00.691 "allow_any_host": true, 00:18:00.691 "hosts": [], 00:18:00.691 "listen_addresses": [ 00:18:00.691 { 00:18:00.691 "adrfam": "IPv4", 00:18:00.691 "traddr": "10.0.0.3", 00:18:00.691 "trsvcid": "4420", 00:18:00.691 "trtype": "TCP" 00:18:00.691 } 00:18:00.691 ], 00:18:00.691 "max_cntlid": 65519, 00:18:00.691 "max_namespaces": 2, 00:18:00.691 "min_cntlid": 1, 00:18:00.691 "model_number": "SPDK bdev Controller", 00:18:00.691 "namespaces": [ 00:18:00.691 { 00:18:00.691 "bdev_name": "Malloc0", 00:18:00.691 "name": "Malloc0", 00:18:00.691 "nguid": "AE4AA2AB98564D6B9D50DEC861AC0C87", 00:18:00.691 "nsid": 1, 00:18:00.691 "uuid": "ae4aa2ab-9856-4d6b-9d50-dec861ac0c87" 00:18:00.691 } 00:18:00.691 ], 00:18:00.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.691 "serial_number": "SPDK00000000000001", 00:18:00.691 "subtype": "NVMe" 00:18:00.691 } 00:18:00.691 ] 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=87283 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:18:00.691 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:00.949 Malloc1 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:00.949 Asynchronous Event Request test 00:18:00.949 Attaching to 10.0.0.3 00:18:00.949 Attached to 10.0.0.3 00:18:00.949 Registering asynchronous event callbacks... 00:18:00.949 Starting namespace attribute notice tests for all controllers... 00:18:00.949 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:00.949 aer_cb - Changed Namespace 00:18:00.949 Cleaning up... 
00:18:00.949 [ 00:18:00.949 { 00:18:00.949 "allow_any_host": true, 00:18:00.949 "hosts": [], 00:18:00.949 "listen_addresses": [], 00:18:00.949 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:00.949 "subtype": "Discovery" 00:18:00.949 }, 00:18:00.949 { 00:18:00.949 "allow_any_host": true, 00:18:00.949 "hosts": [], 00:18:00.949 "listen_addresses": [ 00:18:00.949 { 00:18:00.949 "adrfam": "IPv4", 00:18:00.949 "traddr": "10.0.0.3", 00:18:00.949 "trsvcid": "4420", 00:18:00.949 "trtype": "TCP" 00:18:00.949 } 00:18:00.949 ], 00:18:00.949 "max_cntlid": 65519, 00:18:00.949 "max_namespaces": 2, 00:18:00.949 "min_cntlid": 1, 00:18:00.949 "model_number": "SPDK bdev Controller", 00:18:00.949 "namespaces": [ 00:18:00.949 { 00:18:00.949 "bdev_name": "Malloc0", 00:18:00.949 "name": "Malloc0", 00:18:00.949 "nguid": "AE4AA2AB98564D6B9D50DEC861AC0C87", 00:18:00.949 "nsid": 1, 00:18:00.949 "uuid": "ae4aa2ab-9856-4d6b-9d50-dec861ac0c87" 00:18:00.949 }, 00:18:00.949 { 00:18:00.949 "bdev_name": "Malloc1", 00:18:00.949 "name": "Malloc1", 00:18:00.949 "nguid": "1830211FDE11496796EC0AC274F79CF5", 00:18:00.949 "nsid": 2, 00:18:00.949 "uuid": "1830211f-de11-4967-96ec-0ac274f79cf5" 00:18:00.949 } 00:18:00.949 ], 00:18:00.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.949 "serial_number": "SPDK00000000000001", 00:18:00.949 "subtype": "NVMe" 00:18:00.949 } 00:18:00.949 ] 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 87283 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.949 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:01.208 rmmod nvme_tcp 
00:18:01.208 rmmod nvme_fabrics 00:18:01.208 rmmod nvme_keyring 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 87229 ']' 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 87229 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 87229 ']' 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 87229 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.208 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87229 00:18:01.488 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.488 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.488 killing process with pid 87229 00:18:01.488 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87229' 00:18:01.488 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 87229 00:18:01.488 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 87229 00:18:01.770 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:01.770 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:01.770 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:01.770 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:18:01.770 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:01.770 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:18:01.770 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:18:01.770 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:01.770 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:01.770 15:25:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:01.770 15:25:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.770 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.028 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:18:02.029 00:18:02.029 real 0m3.202s 00:18:02.029 user 0m7.378s 00:18:02.029 sys 0m1.122s 00:18:02.029 ************************************ 00:18:02.029 END TEST nvmf_aer 00:18:02.029 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.029 15:25:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:02.029 ************************************ 00:18:02.029 15:25:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:18:02.029 15:25:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:02.029 15:25:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.029 15:25:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.029 ************************************ 00:18:02.029 START TEST nvmf_async_init 00:18:02.029 ************************************ 00:18:02.029 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:18:02.029 * Looking for test storage... 
00:18:02.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.289 --rc genhtml_branch_coverage=1 00:18:02.289 --rc genhtml_function_coverage=1 00:18:02.289 --rc genhtml_legend=1 00:18:02.289 --rc geninfo_all_blocks=1 00:18:02.289 --rc geninfo_unexecuted_blocks=1 00:18:02.289 00:18:02.289 ' 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.289 --rc genhtml_branch_coverage=1 00:18:02.289 --rc genhtml_function_coverage=1 00:18:02.289 --rc genhtml_legend=1 00:18:02.289 --rc geninfo_all_blocks=1 00:18:02.289 --rc geninfo_unexecuted_blocks=1 00:18:02.289 00:18:02.289 ' 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.289 --rc genhtml_branch_coverage=1 00:18:02.289 --rc genhtml_function_coverage=1 00:18:02.289 --rc genhtml_legend=1 00:18:02.289 --rc geninfo_all_blocks=1 00:18:02.289 --rc geninfo_unexecuted_blocks=1 00:18:02.289 00:18:02.289 ' 00:18:02.289 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.289 --rc genhtml_branch_coverage=1 00:18:02.289 --rc genhtml_function_coverage=1 00:18:02.289 --rc genhtml_legend=1 00:18:02.289 --rc geninfo_all_blocks=1 00:18:02.289 --rc geninfo_unexecuted_blocks=1 00:18:02.290 00:18:02.290 ' 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.290 15:25:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.290 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:18:02.290 15:25:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8271d43e12914b0ab56e9ff395042307 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
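The nvmf_veth_init variables traced here name a veth-plus-bridge topology: two initiator interfaces on the host (10.0.0.1 and 10.0.0.2), two target interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and a bridge nvmf_br joining the host-side peers. A minimal sketch of how one initiator/target pair is wired, compressing the four-interface setup that follows in the trace down to a single pair (all names and addresses are taken from the trace; this is illustrative, not the test's own helper):

    ip netns add nvmf_tgt_ns_spdk                               # target-side namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge joining host-side peers
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                          # host can now reach the target end

Traffic flows host nvmf_init_if -> nvmf_init_br -> nvmf_br -> nvmf_tgt_br -> nvmf_tgt_if (inside the namespace), which is exactly the path the ping checks later in the trace verify.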
00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:02.290 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:02.291 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:02.291 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:02.291 Cannot find device "nvmf_init_br" 00:18:02.291 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:18:02.291 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:02.291 Cannot find device "nvmf_init_br2" 00:18:02.291 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:18:02.291 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:02.291 Cannot find device "nvmf_tgt_br" 00:18:02.291 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:18:02.291 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:02.550 Cannot find device "nvmf_tgt_br2" 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:02.550 Cannot find device "nvmf_init_br" 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:02.550 Cannot find device "nvmf_init_br2" 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:02.550 Cannot find device "nvmf_tgt_br" 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:02.550 Cannot find device "nvmf_tgt_br2" 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:02.550 Cannot find device "nvmf_br" 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:02.550 Cannot find device "nvmf_init_if" 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:02.550 Cannot find device "nvmf_init_if2" 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:02.550 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:18:02.550 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:02.550 15:25:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:02.550 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:02.848 15:25:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:02.848 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:02.848 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.134 ms 00:18:02.848 00:18:02.848 --- 10.0.0.3 ping statistics --- 00:18:02.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.848 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:02.848 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:02.848 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:18:02.848 00:18:02.848 --- 10.0.0.4 ping statistics --- 00:18:02.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.848 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:02.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:18:02.848 00:18:02.848 --- 10.0.0.1 ping statistics --- 00:18:02.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.848 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:02.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:02.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:18:02.848 00:18:02.848 --- 10.0.0.2 ping statistics --- 00:18:02.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.848 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=87523 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 87523 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 87523 ']' 00:18:02.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.848 15:25:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:03.105 [2024-11-20 15:25:49.384986] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:18:03.105 [2024-11-20 15:25:49.385095] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.105 [2024-11-20 15:25:49.538511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.362 [2024-11-20 15:25:49.588355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:03.362 [2024-11-20 15:25:49.588411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.362 [2024-11-20 15:25:49.588421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.362 [2024-11-20 15:25:49.588430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.362 [2024-11-20 15:25:49.588437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.362 [2024-11-20 15:25:49.588747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:03.927 [2024-11-20 15:25:50.391986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.927 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.186 null0 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8271d43e12914b0ab56e9ff395042307 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.186 [2024-11-20 15:25:50.457313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.186 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.445 nvme0n1 00:18:04.445 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.445 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:04.445 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.445 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.445 [ 00:18:04.445 { 00:18:04.445 "aliases": [ 00:18:04.445 "8271d43e-1291-4b0a-b56e-9ff395042307" 00:18:04.445 ], 00:18:04.445 "assigned_rate_limits": { 00:18:04.445 "r_mbytes_per_sec": 0, 00:18:04.445 "rw_ios_per_sec": 0, 00:18:04.445 "rw_mbytes_per_sec": 0, 00:18:04.445 "w_mbytes_per_sec": 0 00:18:04.445 }, 00:18:04.445 "block_size": 512, 00:18:04.445 "claimed": false, 00:18:04.445 "driver_specific": { 00:18:04.445 "mp_policy": "active_passive", 00:18:04.445 "nvme": [ 00:18:04.445 { 00:18:04.445 "ctrlr_data": { 00:18:04.445 "ana_reporting": false, 00:18:04.445 "cntlid": 1, 00:18:04.445 "firmware_revision": "25.01", 00:18:04.445 "model_number": "SPDK bdev Controller", 00:18:04.445 "multi_ctrlr": true, 00:18:04.445 "oacs": { 00:18:04.445 "firmware": 0, 00:18:04.445 "format": 0, 00:18:04.445 "ns_manage": 0, 00:18:04.445 "security": 0 00:18:04.445 }, 00:18:04.445 "serial_number": "00000000000000000000", 00:18:04.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:04.445 "vendor_id": "0x8086" 00:18:04.445 }, 00:18:04.445 "ns_data": { 00:18:04.445 "can_share": true, 00:18:04.445 "id": 1 00:18:04.445 }, 00:18:04.445 "trid": { 00:18:04.445 "adrfam": "IPv4", 00:18:04.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:04.445 "traddr": "10.0.0.3", 00:18:04.445 "trsvcid": "4420", 00:18:04.445 "trtype": "TCP" 00:18:04.445 }, 00:18:04.445 "vs": { 00:18:04.445 "nvme_version": "1.3" 00:18:04.445 } 00:18:04.445 } 00:18:04.445 ] 00:18:04.445 }, 00:18:04.445 "memory_domains": [ 00:18:04.445 { 00:18:04.445 "dma_device_id": "system", 00:18:04.445 "dma_device_type": 1 00:18:04.445 } 00:18:04.445 ], 00:18:04.445 "name": "nvme0n1", 00:18:04.445 "num_blocks": 2097152, 00:18:04.445 "numa_id": -1, 00:18:04.445 "product_name": "NVMe disk", 00:18:04.445 "supported_io_types": { 00:18:04.445 "abort": true, 
00:18:04.445 "compare": true, 00:18:04.445 "compare_and_write": true, 00:18:04.445 "copy": true, 00:18:04.445 "flush": true, 00:18:04.445 "get_zone_info": false, 00:18:04.445 "nvme_admin": true, 00:18:04.445 "nvme_io": true, 00:18:04.445 "nvme_io_md": false, 00:18:04.445 "nvme_iov_md": false, 00:18:04.445 "read": true, 00:18:04.445 "reset": true, 00:18:04.445 "seek_data": false, 00:18:04.445 "seek_hole": false, 00:18:04.445 "unmap": false, 00:18:04.445 "write": true, 00:18:04.445 "write_zeroes": true, 00:18:04.445 "zcopy": false, 00:18:04.445 "zone_append": false, 00:18:04.445 "zone_management": false 00:18:04.445 }, 00:18:04.445 "uuid": "8271d43e-1291-4b0a-b56e-9ff395042307", 00:18:04.445 "zoned": false 00:18:04.445 } 00:18:04.445 ] 00:18:04.445 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.446 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:18:04.446 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.446 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.446 [2024-11-20 15:25:50.760707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:04.446 [2024-11-20 15:25:50.760863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb734f0 (9): Bad file descriptor 00:18:04.446 [2024-11-20 15:25:50.892875] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:18:04.446 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.446 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:04.446 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.446 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.446 [ 00:18:04.446 { 00:18:04.446 "aliases": [ 00:18:04.446 "8271d43e-1291-4b0a-b56e-9ff395042307" 00:18:04.446 ], 00:18:04.446 "assigned_rate_limits": { 00:18:04.446 "r_mbytes_per_sec": 0, 00:18:04.446 "rw_ios_per_sec": 0, 00:18:04.446 "rw_mbytes_per_sec": 0, 00:18:04.446 "w_mbytes_per_sec": 0 00:18:04.446 }, 00:18:04.446 "block_size": 512, 00:18:04.446 "claimed": false, 00:18:04.446 "driver_specific": { 00:18:04.446 "mp_policy": "active_passive", 00:18:04.446 "nvme": [ 00:18:04.446 { 00:18:04.446 "ctrlr_data": { 00:18:04.446 "ana_reporting": false, 00:18:04.446 "cntlid": 2, 00:18:04.446 "firmware_revision": "25.01", 00:18:04.446 "model_number": "SPDK bdev Controller", 00:18:04.446 "multi_ctrlr": true, 00:18:04.446 "oacs": { 00:18:04.446 "firmware": 0, 00:18:04.446 "format": 0, 00:18:04.446 "ns_manage": 0, 00:18:04.446 "security": 0 00:18:04.446 }, 00:18:04.707 "serial_number": "00000000000000000000", 00:18:04.707 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:04.707 "vendor_id": "0x8086" 00:18:04.707 }, 00:18:04.707 "ns_data": { 00:18:04.707 "can_share": true, 00:18:04.707 "id": 1 00:18:04.707 }, 00:18:04.707 "trid": { 00:18:04.707 "adrfam": "IPv4", 00:18:04.707 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:04.707 "traddr": "10.0.0.3", 00:18:04.707 "trsvcid": "4420", 00:18:04.707 "trtype": "TCP" 00:18:04.707 }, 00:18:04.707 "vs": { 00:18:04.707 "nvme_version": "1.3" 00:18:04.707 } 00:18:04.707 } 00:18:04.707 ] 
00:18:04.707 }, 00:18:04.707 "memory_domains": [ 00:18:04.707 { 00:18:04.707 "dma_device_id": "system", 00:18:04.707 "dma_device_type": 1 00:18:04.707 } 00:18:04.707 ], 00:18:04.707 "name": "nvme0n1", 00:18:04.707 "num_blocks": 2097152, 00:18:04.707 "numa_id": -1, 00:18:04.707 "product_name": "NVMe disk", 00:18:04.707 "supported_io_types": { 00:18:04.707 "abort": true, 00:18:04.707 "compare": true, 00:18:04.707 "compare_and_write": true, 00:18:04.707 "copy": true, 00:18:04.707 "flush": true, 00:18:04.707 "get_zone_info": false, 00:18:04.707 "nvme_admin": true, 00:18:04.707 "nvme_io": true, 00:18:04.707 "nvme_io_md": false, 00:18:04.707 "nvme_iov_md": false, 00:18:04.707 "read": true, 00:18:04.707 "reset": true, 00:18:04.707 "seek_data": false, 00:18:04.707 "seek_hole": false, 00:18:04.707 "unmap": false, 00:18:04.707 "write": true, 00:18:04.707 "write_zeroes": true, 00:18:04.707 "zcopy": false, 00:18:04.707 "zone_append": false, 00:18:04.707 "zone_management": false 00:18:04.707 }, 00:18:04.707 "uuid": "8271d43e-1291-4b0a-b56e-9ff395042307", 00:18:04.707 "zoned": false 00:18:04.707 } 00:18:04.707 ] 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.RR72JbYtpY 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.RR72JbYtpY 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.RR72JbYtpY 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.707 15:25:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.707 [2024-11-20 15:25:51.004444] 
tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:04.707 [2024-11-20 15:25:51.004701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:04.707 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.707 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.707 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.707 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.707 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.707 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.707 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.707 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.707 [2024-11-20 15:25:51.028414] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.707 nvme0n1 00:18:04.707 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.707 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:04.707 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.707 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.707 [ 00:18:04.707 { 00:18:04.707 "aliases": [ 00:18:04.707 "8271d43e-1291-4b0a-b56e-9ff395042307" 00:18:04.707 ], 00:18:04.707 "assigned_rate_limits": { 00:18:04.707 "r_mbytes_per_sec": 0, 00:18:04.707 "rw_ios_per_sec": 0, 00:18:04.707 "rw_mbytes_per_sec": 0, 00:18:04.707 "w_mbytes_per_sec": 0 00:18:04.707 }, 00:18:04.707 "block_size": 512, 00:18:04.707 "claimed": false, 00:18:04.707 "driver_specific": { 00:18:04.707 "mp_policy": "active_passive", 00:18:04.707 "nvme": [ 00:18:04.708 { 00:18:04.708 "ctrlr_data": { 00:18:04.708 "ana_reporting": false, 00:18:04.708 "cntlid": 3, 00:18:04.708 "firmware_revision": "25.01", 00:18:04.708 "model_number": "SPDK bdev Controller", 00:18:04.708 "multi_ctrlr": true, 00:18:04.708 "oacs": { 00:18:04.708 "firmware": 0, 00:18:04.708 "format": 0, 00:18:04.708 "ns_manage": 0, 00:18:04.708 "security": 0 00:18:04.708 }, 00:18:04.708 "serial_number": "00000000000000000000", 00:18:04.708 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:04.708 "vendor_id": "0x8086" 00:18:04.708 }, 00:18:04.708 "ns_data": { 00:18:04.708 "can_share": true, 00:18:04.708 "id": 1 00:18:04.708 }, 00:18:04.708 "trid": { 00:18:04.708 "adrfam": "IPv4", 00:18:04.708 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:04.708 "traddr": "10.0.0.3", 00:18:04.708 "trsvcid": "4421", 00:18:04.708 "trtype": "TCP" 00:18:04.708 }, 00:18:04.708 "vs": { 00:18:04.708 "nvme_version": "1.3" 00:18:04.708 } 00:18:04.708 } 00:18:04.708 ] 00:18:04.708 }, 00:18:04.708 "memory_domains": [ 00:18:04.708 { 00:18:04.708 "dma_device_id": "system", 00:18:04.708 "dma_device_type": 1 00:18:04.708 } 00:18:04.708 ], 00:18:04.708 "name": "nvme0n1", 00:18:04.708 "num_blocks": 
2097152, 00:18:04.708 "numa_id": -1, 00:18:04.708 "product_name": "NVMe disk", 00:18:04.708 "supported_io_types": { 00:18:04.708 "abort": true, 00:18:04.708 "compare": true, 00:18:04.708 "compare_and_write": true, 00:18:04.708 "copy": true, 00:18:04.708 "flush": true, 00:18:04.708 "get_zone_info": false, 00:18:04.708 "nvme_admin": true, 00:18:04.708 "nvme_io": true, 00:18:04.708 "nvme_io_md": false, 00:18:04.708 "nvme_iov_md": false, 00:18:04.708 "read": true, 00:18:04.708 "reset": true, 00:18:04.708 "seek_data": false, 00:18:04.708 "seek_hole": false, 00:18:04.708 "unmap": false, 00:18:04.708 "write": true, 00:18:04.708 "write_zeroes": true, 00:18:04.708 "zcopy": false, 00:18:04.708 "zone_append": false, 00:18:04.708 "zone_management": false 00:18:04.708 }, 00:18:04.708 "uuid": "8271d43e-1291-4b0a-b56e-9ff395042307", 00:18:04.708 "zoned": false 00:18:04.708 } 00:18:04.708 ] 00:18:04.708 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.708 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.708 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.708 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:04.708 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.708 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.RR72JbYtpY 00:18:04.708 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:18:04.708 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:18:04.708 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:04.708 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:18:04.967 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:04.967 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:18:04.967 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:04.967 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:04.967 rmmod nvme_tcp 00:18:04.968 rmmod nvme_fabrics 00:18:04.968 rmmod nvme_keyring 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 87523 ']' 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 87523 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 87523 ']' 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 87523 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87523 00:18:04.968 15:25:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.968 killing process with pid 87523 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87523' 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 87523 00:18:04.968 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 87523 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:05.226 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:05.485 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:05.485 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:05.485 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:05.485 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:05.485 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:05.485 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:05.485 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.485 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.485 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:05.485 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:18:05.485 00:18:05.485 real 0m3.542s 00:18:05.485 user 0m2.741s 00:18:05.485 sys 0m1.096s 00:18:05.485 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.485 ************************************ 00:18:05.485 15:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:05.485 END TEST nvmf_async_init 00:18:05.485 ************************************ 00:18:05.743 15:25:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:18:05.743 15:25:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:05.743 15:25:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.743 15:25:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.743 ************************************ 00:18:05.743 START TEST dma 00:18:05.743 ************************************ 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:18:05.743 * Looking for test storage... 00:18:05.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.743 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:06.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.003 --rc genhtml_branch_coverage=1 00:18:06.003 --rc genhtml_function_coverage=1 00:18:06.003 --rc genhtml_legend=1 00:18:06.003 --rc geninfo_all_blocks=1 00:18:06.003 --rc geninfo_unexecuted_blocks=1 00:18:06.003 00:18:06.003 ' 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:06.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.003 --rc genhtml_branch_coverage=1 00:18:06.003 --rc genhtml_function_coverage=1 00:18:06.003 --rc genhtml_legend=1 00:18:06.003 --rc geninfo_all_blocks=1 00:18:06.003 --rc geninfo_unexecuted_blocks=1 00:18:06.003 00:18:06.003 ' 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:06.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.003 --rc genhtml_branch_coverage=1 00:18:06.003 --rc genhtml_function_coverage=1 00:18:06.003 --rc genhtml_legend=1 00:18:06.003 --rc geninfo_all_blocks=1 00:18:06.003 --rc geninfo_unexecuted_blocks=1 00:18:06.003 00:18:06.003 ' 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:06.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.003 --rc genhtml_branch_coverage=1 00:18:06.003 --rc genhtml_function_coverage=1 00:18:06.003 --rc genhtml_legend=1 00:18:06.003 --rc geninfo_all_blocks=1 00:18:06.003 --rc geninfo_unexecuted_blocks=1 00:18:06.003 00:18:06.003 ' 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.003 15:25:52 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.003 15:25:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:06.004 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:18:06.004 00:18:06.004 real 0m0.286s 00:18:06.004 user 0m0.146s 00:18:06.004 sys 0m0.153s 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:06.004 ************************************ 00:18:06.004 END TEST dma 00:18:06.004 ************************************ 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.004 15:25:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.004 ************************************ 00:18:06.004 START TEST nvmf_identify 00:18:06.004 ************************************ 00:18:06.004 15:25:52 
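A note on the dma suite that just finished: it passes in well under a second because host/dma.sh is RDMA-only. The trace shows it comparing the requested transport against rdma ('[' tcp '!=' rdma ']') and exiting 0 before any target is started. A minimal sketch of that guard; the variable name TEST_TRANSPORT is an assumption, since the trace only shows the already-expanded value tcp:

  # RDMA-only test: succeed immediately on any other transport
  if [ "$TEST_TRANSPORT" != "rdma" ]; then
      exit 0
  fi
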
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:06.264 * Looking for test storage... 00:18:06.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:18:06.264 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:06.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.265 --rc genhtml_branch_coverage=1 00:18:06.265 --rc genhtml_function_coverage=1 00:18:06.265 --rc genhtml_legend=1 00:18:06.265 --rc geninfo_all_blocks=1 00:18:06.265 --rc geninfo_unexecuted_blocks=1 00:18:06.265 00:18:06.265 ' 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:06.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.265 --rc genhtml_branch_coverage=1 00:18:06.265 --rc genhtml_function_coverage=1 00:18:06.265 --rc genhtml_legend=1 00:18:06.265 --rc geninfo_all_blocks=1 00:18:06.265 --rc geninfo_unexecuted_blocks=1 00:18:06.265 00:18:06.265 ' 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:06.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.265 --rc genhtml_branch_coverage=1 00:18:06.265 --rc genhtml_function_coverage=1 00:18:06.265 --rc genhtml_legend=1 00:18:06.265 --rc geninfo_all_blocks=1 00:18:06.265 --rc geninfo_unexecuted_blocks=1 00:18:06.265 00:18:06.265 ' 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:06.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.265 --rc genhtml_branch_coverage=1 00:18:06.265 --rc genhtml_function_coverage=1 00:18:06.265 --rc genhtml_legend=1 00:18:06.265 --rc geninfo_all_blocks=1 00:18:06.265 --rc geninfo_unexecuted_blocks=1 00:18:06.265 00:18:06.265 ' 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.265 
15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:06.265 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.265 15:25:52 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:06.265 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:06.266 Cannot find device "nvmf_init_br" 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:06.266 Cannot find device "nvmf_init_br2" 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:06.266 Cannot find device "nvmf_tgt_br" 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:18:06.266 Cannot find device "nvmf_tgt_br2" 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:06.266 Cannot find device "nvmf_init_br" 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:18:06.266 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:06.525 Cannot find device "nvmf_init_br2" 00:18:06.525 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:06.526 Cannot find device "nvmf_tgt_br" 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:06.526 Cannot find device "nvmf_tgt_br2" 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:06.526 Cannot find device "nvmf_br" 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:06.526 Cannot find device "nvmf_init_if" 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:06.526 Cannot find device "nvmf_init_if2" 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:06.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:06.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:06.526 15:25:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:06.526 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:06.794 
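The nvmf_veth_init sequence running here builds the test network from scratch (the "Cannot find device" lines above are just the teardown of a previous run finding nothing to remove): a dedicated namespace for the target, veth pairs whose *_br ends stay in the root namespace, and, continuing in the trace below, a bridge that ties them together plus iptables ACCEPT rules for port 4420. A condensed sketch of the topology for one initiator/target pair, using the interface names and addresses from the trace:

  ip netns add nvmf_tgt_ns_spdk                      # target runs in its own netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # move the target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br            # enslave both root-ns veth ends
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up

The four pings that follow in the trace are the sanity check that both initiator addresses can reach both target addresses across this bridge.
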
15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:06.794 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:06.794 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:18:06.794 00:18:06.794 --- 10.0.0.3 ping statistics --- 00:18:06.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.794 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:06.794 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:06.794 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.156 ms 00:18:06.794 00:18:06.794 --- 10.0.0.4 ping statistics --- 00:18:06.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.794 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:06.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:06.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:18:06.794 00:18:06.794 --- 10.0.0.1 ping statistics --- 00:18:06.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.794 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:06.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:18:06.794 00:18:06.794 --- 10.0.0.2 ping statistics --- 00:18:06.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.794 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:06.794 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87860 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87860 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 87860 ']' 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:07.070 15:25:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:07.070 [2024-11-20 15:25:53.350906] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:18:07.070 [2024-11-20 15:25:53.351020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.070 [2024-11-20 15:25:53.510028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.329 [2024-11-20 15:25:53.596216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.329 [2024-11-20 15:25:53.596302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.329 [2024-11-20 15:25:53.596312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.329 [2024-11-20 15:25:53.596322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.329 [2024-11-20 15:25:53.596329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
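For reference, the target bring-up above amounts to launching nvmf_tgt inside the test namespace with shared-memory id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF), and a four-core mask (-m 0xF, hence the four reactors reported below), then waiting for its RPC socket. The polling loop in this sketch is a simplified stand-in for the harness's waitforlisten helper:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the app is up and listening on its UNIX-domain RPC socket
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
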
00:18:07.329 [2024-11-20 15:25:53.597761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.329 [2024-11-20 15:25:53.597874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.329 [2024-11-20 15:25:53.597962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.329 [2024-11-20 15:25:53.597967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.898 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.898 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:18:07.898 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:07.898 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.898 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:07.898 [2024-11-20 15:25:54.296269] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.898 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.898 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:07.898 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:07.898 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:07.898 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:07.898 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.898 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:08.157 Malloc0 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:08.157 [2024-11-20 15:25:54.444337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:08.157 [ 00:18:08.157 { 00:18:08.157 "allow_any_host": true, 00:18:08.157 "hosts": [], 00:18:08.157 "listen_addresses": [ 00:18:08.157 { 00:18:08.157 "adrfam": "IPv4", 00:18:08.157 "traddr": "10.0.0.3", 00:18:08.157 "trsvcid": "4420", 00:18:08.157 "trtype": "TCP" 00:18:08.157 } 00:18:08.157 ], 00:18:08.157 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:08.157 "subtype": "Discovery" 00:18:08.157 }, 00:18:08.157 { 00:18:08.157 "allow_any_host": true, 00:18:08.157 "hosts": [], 00:18:08.157 "listen_addresses": [ 00:18:08.157 { 00:18:08.157 "adrfam": "IPv4", 00:18:08.157 "traddr": "10.0.0.3", 00:18:08.157 "trsvcid": "4420", 00:18:08.157 "trtype": "TCP" 00:18:08.157 } 00:18:08.157 ], 00:18:08.157 "max_cntlid": 65519, 00:18:08.157 "max_namespaces": 32, 00:18:08.157 "min_cntlid": 1, 00:18:08.157 "model_number": "SPDK bdev Controller", 00:18:08.157 "namespaces": [ 00:18:08.157 { 00:18:08.157 "bdev_name": "Malloc0", 00:18:08.157 "eui64": "ABCDEF0123456789", 00:18:08.157 "name": "Malloc0", 00:18:08.157 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:08.157 "nsid": 1, 00:18:08.157 "uuid": "78706779-9c7b-45ce-85e2-09e6a65e467f" 00:18:08.157 } 00:18:08.157 ], 00:18:08.157 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.157 "serial_number": "SPDK00000000000001", 00:18:08.157 "subtype": "NVMe" 00:18:08.157 } 00:18:08.157 ] 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.157 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:08.157 [2024-11-20 15:25:54.529894] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
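The subsystem state dumped in the JSON above was assembled by the rpc_cmd calls traced in identify.sh; rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py, so roughly equivalent manual invocations would be (paths relative to the SPDK repo; a sketch, not the harness itself):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

With that in place, spdk_nvme_identify is pointed at the discovery subsystem on 10.0.0.3:4420, which produces the NVMe/TCP connection trace that follows.
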
00:18:08.157 [2024-11-20 15:25:54.529962] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87919 ] 00:18:08.420 [2024-11-20 15:25:54.678766] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:18:08.420 [2024-11-20 15:25:54.678860] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:08.420 [2024-11-20 15:25:54.678866] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:08.420 [2024-11-20 15:25:54.678887] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:08.420 [2024-11-20 15:25:54.678901] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:08.420 [2024-11-20 15:25:54.679350] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:18:08.420 [2024-11-20 15:25:54.679420] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7f2d90 0 00:18:08.420 [2024-11-20 15:25:54.685707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:08.420 [2024-11-20 15:25:54.685736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:08.420 [2024-11-20 15:25:54.685742] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:08.420 [2024-11-20 15:25:54.685746] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:08.420 [2024-11-20 15:25:54.685787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.685793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.685798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f2d90) 00:18:08.420 [2024-11-20 15:25:54.685816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:08.420 [2024-11-20 15:25:54.685853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833600, cid 0, qid 0 00:18:08.420 [2024-11-20 15:25:54.693700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.420 [2024-11-20 15:25:54.693725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.420 [2024-11-20 15:25:54.693730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.693736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833600) on tqpair=0x7f2d90 00:18:08.420 [2024-11-20 15:25:54.693752] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:08.420 [2024-11-20 15:25:54.693764] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:18:08.420 [2024-11-20 15:25:54.693771] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:18:08.420 [2024-11-20 15:25:54.693797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.693802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
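Everything from nvme_tcp_qpair_connect_sock through the FABRIC CONNECT completion above is the standard NVMe/TCP bring-up: a TCP connect, the ICReq/ICResp exchange, then a Fabrics CONNECT command on the admin queue, which here returns controller ID 0x0001; the FABRIC PROPERTY GET commands that follow below fetch VS and CAP before the controller is enabled. Assuming a stock nvme-cli on the initiator side, the same handshake against this listener can be reproduced with:

  nvme discover -t tcp -a 10.0.0.3 -s 4420
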
00:18:08.420 [2024-11-20 15:25:54.693806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f2d90) 00:18:08.420 [2024-11-20 15:25:54.693818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.420 [2024-11-20 15:25:54.693856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833600, cid 0, qid 0 00:18:08.420 [2024-11-20 15:25:54.693932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.420 [2024-11-20 15:25:54.693938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.420 [2024-11-20 15:25:54.693941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.693946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833600) on tqpair=0x7f2d90 00:18:08.420 [2024-11-20 15:25:54.693952] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:18:08.420 [2024-11-20 15:25:54.693959] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:18:08.420 [2024-11-20 15:25:54.693966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.693970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.693974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f2d90) 00:18:08.420 [2024-11-20 15:25:54.693981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.420 [2024-11-20 15:25:54.693996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833600, cid 0, qid 0 00:18:08.420 [2024-11-20 15:25:54.694041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.420 [2024-11-20 15:25:54.694047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.420 [2024-11-20 15:25:54.694051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.694055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833600) on tqpair=0x7f2d90 00:18:08.420 [2024-11-20 15:25:54.694061] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:18:08.420 [2024-11-20 15:25:54.694069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:18:08.420 [2024-11-20 15:25:54.694076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.694080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.694084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f2d90) 00:18:08.420 [2024-11-20 15:25:54.694090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.420 [2024-11-20 15:25:54.694105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833600, cid 0, qid 0 00:18:08.420 [2024-11-20 15:25:54.694145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.420 [2024-11-20 15:25:54.694151] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.420 [2024-11-20 15:25:54.694155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.694159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833600) on tqpair=0x7f2d90 00:18:08.420 [2024-11-20 15:25:54.694164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:08.420 [2024-11-20 15:25:54.694173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.694177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.694181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f2d90) 00:18:08.420 [2024-11-20 15:25:54.694187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.420 [2024-11-20 15:25:54.694201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833600, cid 0, qid 0 00:18:08.420 [2024-11-20 15:25:54.694258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.420 [2024-11-20 15:25:54.694264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.420 [2024-11-20 15:25:54.694268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.694272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833600) on tqpair=0x7f2d90 00:18:08.420 [2024-11-20 15:25:54.694276] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:18:08.420 [2024-11-20 15:25:54.694282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:18:08.420 [2024-11-20 15:25:54.694290] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:08.420 [2024-11-20 15:25:54.694403] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:18:08.420 [2024-11-20 15:25:54.694408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:08.420 [2024-11-20 15:25:54.694418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.694422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.694426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f2d90) 00:18:08.420 [2024-11-20 15:25:54.694432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.420 [2024-11-20 15:25:54.694448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833600, cid 0, qid 0 00:18:08.420 [2024-11-20 15:25:54.694507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.420 [2024-11-20 15:25:54.694513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.420 [2024-11-20 15:25:54.694516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
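The state transitions above are the controller-enable sequence from the NVMe base spec: confirm CC.EN = 0 and CSTS.RDY = 0 (controller disabled), write CC.EN = 1 via a FABRIC PROPERTY SET, then poll until CSTS.RDY = 1, which completes below. On a connected fabrics controller the same properties can be read back with nvme-cli; the /dev/nvme0 name is an assumption, check `nvme list` first:

  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme get-property /dev/nvme0 --offset=0x14 --human-readable   # CC, controller configuration
  nvme get-property /dev/nvme0 --offset=0x1c --human-readable   # CSTS, controller status
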
00:18:08.420 [2024-11-20 15:25:54.694520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833600) on tqpair=0x7f2d90 00:18:08.420 [2024-11-20 15:25:54.694525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:08.420 [2024-11-20 15:25:54.694534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.694538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.420 [2024-11-20 15:25:54.694542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f2d90) 00:18:08.420 [2024-11-20 15:25:54.694548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.420 [2024-11-20 15:25:54.694561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833600, cid 0, qid 0 00:18:08.420 [2024-11-20 15:25:54.694601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.420 [2024-11-20 15:25:54.694607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.420 [2024-11-20 15:25:54.694611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.694614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833600) on tqpair=0x7f2d90 00:18:08.421 [2024-11-20 15:25:54.694619] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:08.421 [2024-11-20 15:25:54.694625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:18:08.421 [2024-11-20 15:25:54.694633] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:18:08.421 [2024-11-20 15:25:54.694651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:18:08.421 [2024-11-20 15:25:54.694664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.694668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f2d90) 00:18:08.421 [2024-11-20 15:25:54.694686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-11-20 15:25:54.694701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833600, cid 0, qid 0 00:18:08.421 [2024-11-20 15:25:54.694792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:08.421 [2024-11-20 15:25:54.694798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:08.421 [2024-11-20 15:25:54.694802] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.694806] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f2d90): datao=0, datal=4096, cccid=0 00:18:08.421 [2024-11-20 15:25:54.694812] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x833600) on tqpair(0x7f2d90): expected_datao=0, payload_size=4096 00:18:08.421 [2024-11-20 15:25:54.694817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
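The c2h_data PDU above carries the 4096-byte IDENTIFY payload requested with cdw10:00000001 (CNS 01h, Identify Controller); the identify_done lines that follow report what the driver extracted from it (max transfer size, MDTS, CNTLID, SGE limit, fused compare-and-write support). The test drives this through SPDK's identify example exactly as invoked earlier; against a kernel initiator the same data is what `nvme id-ctrl` decodes (device name assumed):

  # SPDK userspace path, as run by this test:
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  # kernel path, after nvme connect:
  nvme id-ctrl /dev/nvme0
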
00:18:08.421 [2024-11-20 15:25:54.694826] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.694831] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.694840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.421 [2024-11-20 15:25:54.694846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.421 [2024-11-20 15:25:54.694849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.694853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833600) on tqpair=0x7f2d90 00:18:08.421 [2024-11-20 15:25:54.694864] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:18:08.421 [2024-11-20 15:25:54.694870] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:18:08.421 [2024-11-20 15:25:54.694874] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:18:08.421 [2024-11-20 15:25:54.694880] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:18:08.421 [2024-11-20 15:25:54.694885] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:18:08.421 [2024-11-20 15:25:54.694890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:18:08.421 [2024-11-20 15:25:54.694905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:18:08.421 [2024-11-20 15:25:54.694912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.694916] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.694920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f2d90) 00:18:08.421 [2024-11-20 15:25:54.694926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.421 [2024-11-20 15:25:54.694942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833600, cid 0, qid 0 00:18:08.421 [2024-11-20 15:25:54.694996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.421 [2024-11-20 15:25:54.695001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.421 [2024-11-20 15:25:54.695005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833600) on tqpair=0x7f2d90 00:18:08.421 [2024-11-20 15:25:54.695017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f2d90) 00:18:08.421 [2024-11-20 15:25:54.695031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.421 [2024-11-20 15:25:54.695038] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7f2d90) 00:18:08.421 [2024-11-20 15:25:54.695052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.421 [2024-11-20 15:25:54.695058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7f2d90) 00:18:08.421 [2024-11-20 15:25:54.695070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.421 [2024-11-20 15:25:54.695077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.421 [2024-11-20 15:25:54.695089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.421 [2024-11-20 15:25:54.695095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:08.421 [2024-11-20 15:25:54.695106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:08.421 [2024-11-20 15:25:54.695113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f2d90) 00:18:08.421 [2024-11-20 15:25:54.695123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-11-20 15:25:54.695138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833600, cid 0, qid 0 00:18:08.421 [2024-11-20 15:25:54.695144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833780, cid 1, qid 0 00:18:08.421 [2024-11-20 15:25:54.695148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833900, cid 2, qid 0 00:18:08.421 [2024-11-20 15:25:54.695153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.421 [2024-11-20 15:25:54.695158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833c00, cid 4, qid 0 00:18:08.421 [2024-11-20 15:25:54.695231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.421 [2024-11-20 15:25:54.695237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.421 [2024-11-20 15:25:54.695240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833c00) on tqpair=0x7f2d90 00:18:08.421 [2024-11-20 15:25:54.695250] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:18:08.421 [2024-11-20 15:25:54.695256] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:18:08.421 [2024-11-20 15:25:54.695265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f2d90) 00:18:08.421 [2024-11-20 15:25:54.695275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-11-20 15:25:54.695290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833c00, cid 4, qid 0 00:18:08.421 [2024-11-20 15:25:54.695339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:08.421 [2024-11-20 15:25:54.695345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:08.421 [2024-11-20 15:25:54.695348] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695352] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f2d90): datao=0, datal=4096, cccid=4 00:18:08.421 [2024-11-20 15:25:54.695357] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x833c00) on tqpair(0x7f2d90): expected_datao=0, payload_size=4096 00:18:08.421 [2024-11-20 15:25:54.695361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695368] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695372] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.421 [2024-11-20 15:25:54.695385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.421 [2024-11-20 15:25:54.695389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833c00) on tqpair=0x7f2d90 00:18:08.421 [2024-11-20 15:25:54.695407] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:18:08.421 [2024-11-20 15:25:54.695451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f2d90) 00:18:08.421 [2024-11-20 15:25:54.695461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-11-20 15:25:54.695469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.421 [2024-11-20 15:25:54.695476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7f2d90) 00:18:08.421 [2024-11-20 15:25:54.695482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.421 [2024-11-20 15:25:54.695506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x833c00, cid 4, qid 0 00:18:08.421 [2024-11-20 15:25:54.695511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833d80, cid 5, qid 0 00:18:08.421 [2024-11-20 15:25:54.695614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:08.422 [2024-11-20 15:25:54.695620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:08.422 [2024-11-20 15:25:54.695623] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:08.422 [2024-11-20 15:25:54.695627] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f2d90): datao=0, datal=1024, cccid=4 00:18:08.422 [2024-11-20 15:25:54.695632] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x833c00) on tqpair(0x7f2d90): expected_datao=0, payload_size=1024 00:18:08.422 [2024-11-20 15:25:54.695637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.422 [2024-11-20 15:25:54.695643] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:08.422 [2024-11-20 15:25:54.695646] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:08.422 [2024-11-20 15:25:54.695652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.422 [2024-11-20 15:25:54.695657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.422 [2024-11-20 15:25:54.695661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.422 [2024-11-20 15:25:54.695665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833d80) on tqpair=0x7f2d90 00:18:08.422 [2024-11-20 15:25:54.736796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.422 [2024-11-20 15:25:54.736840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.422 [2024-11-20 15:25:54.736845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.422 [2024-11-20 15:25:54.736850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833c00) on tqpair=0x7f2d90 00:18:08.422 [2024-11-20 15:25:54.736878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.422 [2024-11-20 15:25:54.736883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f2d90) 00:18:08.422 [2024-11-20 15:25:54.736897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.422 [2024-11-20 15:25:54.736944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833c00, cid 4, qid 0 00:18:08.422 [2024-11-20 15:25:54.737037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:08.422 [2024-11-20 15:25:54.737043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:08.422 [2024-11-20 15:25:54.737047] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:08.422 [2024-11-20 15:25:54.737052] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f2d90): datao=0, datal=3072, cccid=4 00:18:08.422 [2024-11-20 15:25:54.737057] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x833c00) on tqpair(0x7f2d90): expected_datao=0, payload_size=3072 00:18:08.422 [2024-11-20 15:25:54.737062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.422 [2024-11-20 15:25:54.737071] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:08.422 [2024-11-20 15:25:54.737075] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:18:08.422 [2024-11-20 15:25:54.737083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:08.422 [2024-11-20 15:25:54.737089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:08.422 [2024-11-20 15:25:54.737092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:08.422 [2024-11-20 15:25:54.737096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833c00) on tqpair=0x7f2d90
00:18:08.422 [2024-11-20 15:25:54.737106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:08.422 [2024-11-20 15:25:54.737110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f2d90)
00:18:08.422 [2024-11-20 15:25:54.737116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:08.422 [2024-11-20 15:25:54.737138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833c00, cid 4, qid 0
00:18:08.422 [2024-11-20 15:25:54.737193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:18:08.422 [2024-11-20 15:25:54.737199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:18:08.422 [2024-11-20 15:25:54.737203] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:18:08.422 [2024-11-20 15:25:54.737207] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f2d90): datao=0, datal=8, cccid=4
00:18:08.422 [2024-11-20 15:25:54.737211] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x833c00) on tqpair(0x7f2d90): expected_datao=0, payload_size=8
00:18:08.422 [2024-11-20 15:25:54.737216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:08.422 [2024-11-20 15:25:54.737222] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:18:08.422 [2024-11-20 15:25:54.737226] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:18:08.422 =====================================================
00:18:08.422 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery
00:18:08.422 =====================================================
00:18:08.422 Controller Capabilities/Features
00:18:08.422 ================================
00:18:08.422 Vendor ID: 0000
00:18:08.422 Subsystem Vendor ID: 0000
00:18:08.422 Serial Number: ....................
00:18:08.422 Model Number: ........................................
00:18:08.422 Firmware Version: 25.01
00:18:08.422 Recommended Arb Burst: 0
00:18:08.422 IEEE OUI Identifier: 00 00 00
00:18:08.422 Multi-path I/O
00:18:08.422 May have multiple subsystem ports: No
00:18:08.422 May have multiple controllers: No
00:18:08.422 Associated with SR-IOV VF: No
00:18:08.422 Max Data Transfer Size: 131072
00:18:08.422 Max Number of Namespaces: 0
00:18:08.422 Max Number of I/O Queues: 1024
00:18:08.422 NVMe Specification Version (VS): 1.3
00:18:08.422 NVMe Specification Version (Identify): 1.3
00:18:08.422 Maximum Queue Entries: 128
00:18:08.422 Contiguous Queues Required: Yes
00:18:08.422 Arbitration Mechanisms Supported
00:18:08.422 Weighted Round Robin: Not Supported
00:18:08.422 Vendor Specific: Not Supported
00:18:08.422 Reset Timeout: 15000 ms
00:18:08.422 Doorbell Stride: 4 bytes
00:18:08.422 NVM Subsystem Reset: Not Supported
00:18:08.422 Command Sets Supported
00:18:08.422 NVM Command Set: Supported
00:18:08.422 Boot Partition: Not Supported
00:18:08.422 Memory Page Size Minimum: 4096 bytes
00:18:08.422 Memory Page Size Maximum: 4096 bytes
00:18:08.422 Persistent Memory Region: Not Supported
00:18:08.422 Optional Asynchronous Events Supported
00:18:08.422 Namespace Attribute Notices: Not Supported
00:18:08.422 Firmware Activation Notices: Not Supported
00:18:08.422 ANA Change Notices: Not Supported
00:18:08.422 PLE Aggregate Log Change Notices: Not Supported
00:18:08.422 LBA Status Info Alert Notices: Not Supported
00:18:08.422 EGE Aggregate Log Change Notices: Not Supported
00:18:08.422 Normal NVM Subsystem Shutdown event: Not Supported
00:18:08.422 Zone Descriptor Change Notices: Not Supported
00:18:08.422 Discovery Log Change Notices: Supported
00:18:08.422 Controller Attributes
00:18:08.422 128-bit Host Identifier: Not Supported
00:18:08.422 Non-Operational Permissive Mode: Not Supported
00:18:08.422 NVM Sets: Not Supported
00:18:08.422 Read Recovery Levels: Not Supported
00:18:08.422 Endurance Groups: Not Supported
00:18:08.422 Predictable Latency Mode: Not Supported
00:18:08.422 Traffic Based Keep ALive: Not Supported
00:18:08.422 Namespace Granularity: Not Supported
00:18:08.422 SQ Associations: Not Supported
00:18:08.422 UUID List: Not Supported
00:18:08.422 Multi-Domain Subsystem: Not Supported
00:18:08.422 Fixed Capacity Management: Not Supported
00:18:08.422 Variable Capacity Management: Not Supported
00:18:08.422 Delete Endurance Group: Not Supported
00:18:08.422 Delete NVM Set: Not Supported
00:18:08.422 Extended LBA Formats Supported: Not Supported
00:18:08.422 Flexible Data Placement Supported: Not Supported
00:18:08.422 
00:18:08.422 Controller Memory Buffer Support
00:18:08.422 ================================
00:18:08.422 Supported: No
00:18:08.422 
00:18:08.422 Persistent Memory Region Support
00:18:08.422 ================================
00:18:08.422 Supported: No
00:18:08.422 
00:18:08.422 Admin Command Set Attributes
00:18:08.422 ============================
00:18:08.422 Security Send/Receive: Not Supported
00:18:08.422 Format NVM: Not Supported
00:18:08.422 Firmware Activate/Download: Not Supported
00:18:08.422 Namespace Management: Not Supported
00:18:08.422 Device Self-Test: Not Supported
00:18:08.422 Directives: Not Supported
00:18:08.422 NVMe-MI: Not Supported
00:18:08.422 Virtualization Management: Not Supported
00:18:08.422 Doorbell Buffer Config: Not Supported
00:18:08.422 Get LBA Status Capability: Not Supported
00:18:08.422 Command & Feature Lockdown Capability: Not Supported
00:18:08.422 Abort Command Limit: 1
00:18:08.422 Async Event Request Limit: 4
00:18:08.422 Number of Firmware Slots: N/A
00:18:08.422 Firmware Slot 1 Read-Only: N/A
00:18:08.422 [2024-11-20 15:25:54.780739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:08.422 [2024-11-20 15:25:54.780792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:08.422 [2024-11-20 15:25:54.780798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:08.422 [2024-11-20 15:25:54.780804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833c00) on tqpair=0x7f2d90
00:18:08.422 Firmware Activation Without Reset: N/A
00:18:08.422 Multiple Update Detection Support: N/A
00:18:08.422 Firmware Update Granularity: No Information Provided
00:18:08.422 Per-Namespace SMART Log: No
00:18:08.422 Asymmetric Namespace Access Log Page: Not Supported
00:18:08.422 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:18:08.422 Command Effects Log Page: Not Supported
00:18:08.422 Get Log Page Extended Data: Supported
00:18:08.422 Telemetry Log Pages: Not Supported
00:18:08.422 Persistent Event Log Pages: Not Supported
00:18:08.422 Supported Log Pages Log Page: May Support
00:18:08.422 Commands Supported & Effects Log Page: Not Supported
00:18:08.422 Feature Identifiers & Effects Log Page:May Support
00:18:08.422 NVMe-MI Commands & Effects Log Page: May Support
00:18:08.423 Data Area 4 for Telemetry Log: Not Supported
00:18:08.423 Error Log Page Entries Supported: 128
00:18:08.423 Keep Alive: Not Supported
00:18:08.423 
00:18:08.423 NVM Command Set Attributes
00:18:08.423 ==========================
00:18:08.423 Submission Queue Entry Size
00:18:08.423 Max: 1
00:18:08.423 Min: 1
00:18:08.423 Completion Queue Entry Size
00:18:08.423 Max: 1
00:18:08.423 Min: 1
00:18:08.423 Number of Namespaces: 0
00:18:08.423 Compare Command: Not Supported
00:18:08.423 Write Uncorrectable Command: Not Supported
00:18:08.423 Dataset Management Command: Not Supported
00:18:08.423 Write Zeroes Command: Not Supported
00:18:08.423 Set Features Save Field: Not Supported
00:18:08.423 Reservations: Not Supported
00:18:08.423 Timestamp: Not Supported
00:18:08.423 Copy: Not Supported
00:18:08.423 Volatile Write Cache: Not Present
00:18:08.423 Atomic Write Unit (Normal): 1
00:18:08.423 Atomic Write Unit (PFail): 1
00:18:08.423 Atomic Compare & Write Unit: 1
00:18:08.423 Fused Compare & Write: Supported
00:18:08.423 Scatter-Gather List
00:18:08.423 SGL Command Set: Supported
00:18:08.423 SGL Keyed: Supported
00:18:08.423 SGL Bit Bucket Descriptor: Not Supported
00:18:08.423 SGL Metadata Pointer: Not Supported
00:18:08.423 Oversized SGL: Not Supported
00:18:08.423 SGL Metadata Address: Not Supported
00:18:08.423 SGL Offset: Supported
00:18:08.423 Transport SGL Data Block: Not Supported
00:18:08.423 Replay Protected Memory Block: Not Supported
00:18:08.423 
00:18:08.423 Firmware Slot Information
00:18:08.423 =========================
00:18:08.423 Active slot: 0
00:18:08.423 
00:18:08.423 
00:18:08.423 Error Log
00:18:08.423 =========
00:18:08.423 
00:18:08.423 Active Namespaces
00:18:08.423 =================
00:18:08.423 Discovery Log Page
00:18:08.423 ==================
00:18:08.423 Generation Counter: 2
00:18:08.423 Number of Records: 2
00:18:08.423 Record Format: 0
00:18:08.423 
00:18:08.423 Discovery Log Entry 0
00:18:08.423 ----------------------
00:18:08.423 Transport Type: 3 (TCP)
00:18:08.423 Address Family: 1 (IPv4)
00:18:08.423 Subsystem Type: 3 (Current Discovery Subsystem)
00:18:08.423 Entry Flags:
00:18:08.423 Duplicate Returned Information: 1
00:18:08.423 Explicit Persistent Connection Support for Discovery: 1
00:18:08.423 Transport Requirements:
00:18:08.423 Secure Channel: Not Required
00:18:08.423 Port ID: 0 (0x0000)
00:18:08.423 Controller ID: 65535 (0xffff)
00:18:08.423 Admin Max SQ Size: 128
00:18:08.423 Transport Service Identifier: 4420
00:18:08.423 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:18:08.423 Transport Address: 10.0.0.3
00:18:08.423 Discovery Log Entry 1
00:18:08.423 ----------------------
00:18:08.423 Transport Type: 3 (TCP)
00:18:08.423 Address Family: 1 (IPv4)
00:18:08.423 Subsystem Type: 2 (NVM Subsystem)
00:18:08.423 Entry Flags:
00:18:08.423 Duplicate Returned Information: 0
00:18:08.423 Explicit Persistent Connection Support for Discovery: 0
00:18:08.423 Transport Requirements:
00:18:08.423 Secure Channel: Not Required
00:18:08.423 Port ID: 0 (0x0000)
00:18:08.423 Controller ID: 65535 (0xffff)
00:18:08.423 Admin Max SQ Size: 128
00:18:08.423 Transport Service Identifier: 4420
00:18:08.423 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:18:08.423 Transport Address: 10.0.0.3 [2024-11-20 15:25:54.780985] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:18:08.423 [2024-11-20 15:25:54.781001] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833600) on tqpair=0x7f2d90
00:18:08.423 [2024-11-20 15:25:54.781011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:08.423 [2024-11-20 15:25:54.781017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833780) on tqpair=0x7f2d90
00:18:08.423 [2024-11-20 15:25:54.781022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:08.423 [2024-11-20 15:25:54.781028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833900) on tqpair=0x7f2d90
00:18:08.423 [2024-11-20 15:25:54.781033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:08.423 [2024-11-20 15:25:54.781039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90
00:18:08.423 [2024-11-20 15:25:54.781043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:08.423 [2024-11-20 15:25:54.781059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:08.423 [2024-11-20 15:25:54.781065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:08.423 [2024-11-20 15:25:54.781069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90)
00:18:08.423 [2024-11-20 15:25:54.781081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:08.423 [2024-11-20 15:25:54.781113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0
00:18:08.423 [2024-11-20 15:25:54.781207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:08.423 [2024-11-20 15:25:54.781213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:08.423 [2024-11-20 15:25:54.781217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:08.423 [2024-11-20 15:25:54.781221]
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.423 [2024-11-20 15:25:54.781229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.423 [2024-11-20 15:25:54.781243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.423 [2024-11-20 15:25:54.781262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.423 [2024-11-20 15:25:54.781344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.423 [2024-11-20 15:25:54.781350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.423 [2024-11-20 15:25:54.781354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.423 [2024-11-20 15:25:54.781363] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:18:08.423 [2024-11-20 15:25:54.781369] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:18:08.423 [2024-11-20 15:25:54.781378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.423 [2024-11-20 15:25:54.781392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.423 [2024-11-20 15:25:54.781407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.423 [2024-11-20 15:25:54.781449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.423 [2024-11-20 15:25:54.781455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.423 [2024-11-20 15:25:54.781459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.423 [2024-11-20 15:25:54.781473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.423 [2024-11-20 15:25:54.781487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.423 [2024-11-20 15:25:54.781501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.423 [2024-11-20 15:25:54.781541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.423 [2024-11-20 15:25:54.781547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.423 [2024-11-20 15:25:54.781551] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.423 [2024-11-20 15:25:54.781563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.423 [2024-11-20 15:25:54.781577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.423 [2024-11-20 15:25:54.781590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.423 [2024-11-20 15:25:54.781635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.423 [2024-11-20 15:25:54.781641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.423 [2024-11-20 15:25:54.781645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.423 [2024-11-20 15:25:54.781658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.423 [2024-11-20 15:25:54.781662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.781665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.424 [2024-11-20 15:25:54.781672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.424 [2024-11-20 15:25:54.781698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.424 [2024-11-20 15:25:54.781742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.424 [2024-11-20 15:25:54.781748] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.424 [2024-11-20 15:25:54.781752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.781755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.424 [2024-11-20 15:25:54.781764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.781768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.781772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.424 [2024-11-20 15:25:54.781778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.424 [2024-11-20 15:25:54.781792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.424 [2024-11-20 15:25:54.781835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.424 [2024-11-20 15:25:54.781841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.424 [2024-11-20 15:25:54.781844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.781848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.424 
[2024-11-20 15:25:54.781857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.781861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.781864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.424 [2024-11-20 15:25:54.781871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.424 [2024-11-20 15:25:54.781884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.424 [2024-11-20 15:25:54.781924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.424 [2024-11-20 15:25:54.781930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.424 [2024-11-20 15:25:54.781934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.781938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.424 [2024-11-20 15:25:54.781946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.781950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.781954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.424 [2024-11-20 15:25:54.781960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.424 [2024-11-20 15:25:54.781973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.424 [2024-11-20 15:25:54.782016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.424 [2024-11-20 15:25:54.782022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.424 [2024-11-20 15:25:54.782025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782029] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.424 [2024-11-20 15:25:54.782038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.424 [2024-11-20 15:25:54.782052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.424 [2024-11-20 15:25:54.782065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.424 [2024-11-20 15:25:54.782108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.424 [2024-11-20 15:25:54.782114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.424 [2024-11-20 15:25:54.782117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.424 [2024-11-20 15:25:54.782130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.424 [2024-11-20 
15:25:54.782137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.424 [2024-11-20 15:25:54.782144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.424 [2024-11-20 15:25:54.782157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.424 [2024-11-20 15:25:54.782200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.424 [2024-11-20 15:25:54.782205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.424 [2024-11-20 15:25:54.782209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.424 [2024-11-20 15:25:54.782229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.424 [2024-11-20 15:25:54.782243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.424 [2024-11-20 15:25:54.782256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.424 [2024-11-20 15:25:54.782299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.424 [2024-11-20 15:25:54.782305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.424 [2024-11-20 15:25:54.782308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.424 [2024-11-20 15:25:54.782321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.424 [2024-11-20 15:25:54.782335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.424 [2024-11-20 15:25:54.782349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.424 [2024-11-20 15:25:54.782389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.424 [2024-11-20 15:25:54.782395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.424 [2024-11-20 15:25:54.782398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782402] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.424 [2024-11-20 15:25:54.782411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.424 [2024-11-20 15:25:54.782425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.424 [2024-11-20 15:25:54.782438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.424 [2024-11-20 15:25:54.782481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.424 [2024-11-20 15:25:54.782487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.424 [2024-11-20 15:25:54.782491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.424 [2024-11-20 15:25:54.782503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.424 [2024-11-20 15:25:54.782511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.424 [2024-11-20 15:25:54.782517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.424 [2024-11-20 15:25:54.782530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.424 [2024-11-20 15:25:54.782571] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.424 [2024-11-20 15:25:54.782577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.425 [2024-11-20 15:25:54.782580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.425 [2024-11-20 15:25:54.782593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.425 [2024-11-20 15:25:54.782607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.425 [2024-11-20 15:25:54.782620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.425 [2024-11-20 15:25:54.782660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.425 [2024-11-20 15:25:54.782666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.425 [2024-11-20 15:25:54.782670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.425 [2024-11-20 15:25:54.782689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.425 [2024-11-20 15:25:54.782703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.425 [2024-11-20 15:25:54.782717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.425 [2024-11-20 
15:25:54.782762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.425 [2024-11-20 15:25:54.782768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.425 [2024-11-20 15:25:54.782772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.425 [2024-11-20 15:25:54.782784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.425 [2024-11-20 15:25:54.782798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.425 [2024-11-20 15:25:54.782812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.425 [2024-11-20 15:25:54.782853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.425 [2024-11-20 15:25:54.782859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.425 [2024-11-20 15:25:54.782863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.425 [2024-11-20 15:25:54.782875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.425 [2024-11-20 15:25:54.782889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.425 [2024-11-20 15:25:54.782903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.425 [2024-11-20 15:25:54.782947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.425 [2024-11-20 15:25:54.782952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.425 [2024-11-20 15:25:54.782956] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.425 [2024-11-20 15:25:54.782968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.782976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.425 [2024-11-20 15:25:54.782982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.425 [2024-11-20 15:25:54.782996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.425 [2024-11-20 15:25:54.783039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.425 [2024-11-20 15:25:54.783045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.425 [2024-11-20 
15:25:54.783048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.425 [2024-11-20 15:25:54.783061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.425 [2024-11-20 15:25:54.783075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.425 [2024-11-20 15:25:54.783088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.425 [2024-11-20 15:25:54.783128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.425 [2024-11-20 15:25:54.783134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.425 [2024-11-20 15:25:54.783137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.425 [2024-11-20 15:25:54.783150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783158] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.425 [2024-11-20 15:25:54.783164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.425 [2024-11-20 15:25:54.783177] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.425 [2024-11-20 15:25:54.783217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.425 [2024-11-20 15:25:54.783223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.425 [2024-11-20 15:25:54.783226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.425 [2024-11-20 15:25:54.783238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.425 [2024-11-20 15:25:54.783252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.425 [2024-11-20 15:25:54.783266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.425 [2024-11-20 15:25:54.783308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.425 [2024-11-20 15:25:54.783314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.425 [2024-11-20 15:25:54.783318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 
00:18:08.425 [2024-11-20 15:25:54.783330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.425 [2024-11-20 15:25:54.783344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.425 [2024-11-20 15:25:54.783357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.425 [2024-11-20 15:25:54.783397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.425 [2024-11-20 15:25:54.783403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.425 [2024-11-20 15:25:54.783407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.425 [2024-11-20 15:25:54.783419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.425 [2024-11-20 15:25:54.783433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.425 [2024-11-20 15:25:54.783447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.425 [2024-11-20 15:25:54.783485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.425 [2024-11-20 15:25:54.783491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.425 [2024-11-20 15:25:54.783494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.425 [2024-11-20 15:25:54.783507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.425 [2024-11-20 15:25:54.783521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.425 [2024-11-20 15:25:54.783534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.425 [2024-11-20 15:25:54.783574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.425 [2024-11-20 15:25:54.783580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.425 [2024-11-20 15:25:54.783583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.425 [2024-11-20 15:25:54.783595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.425 [2024-11-20 15:25:54.783600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:18:08.425 [2024-11-20 15:25:54.783603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.425 [2024-11-20 15:25:54.783610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.425 [2024-11-20 15:25:54.783623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.425 [2024-11-20 15:25:54.783665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.426 [2024-11-20 15:25:54.783671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.426 [2024-11-20 15:25:54.783682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.426 [2024-11-20 15:25:54.783686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.426 [2024-11-20 15:25:54.783694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.426 [2024-11-20 15:25:54.783699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.426 [2024-11-20 15:25:54.783702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.426 [2024-11-20 15:25:54.783708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.426 [2024-11-20 15:25:54.783722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.426 [2024-11-20 15:25:54.783768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.426 [2024-11-20 15:25:54.783774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.426 [2024-11-20 15:25:54.783777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.426 [2024-11-20 15:25:54.783781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.426 [2024-11-20 15:25:54.783790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.426 [2024-11-20 15:25:54.783794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.426 [2024-11-20 15:25:54.783797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.426 [2024-11-20 15:25:54.783804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.426 [2024-11-20 15:25:54.783817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.426 [2024-11-20 15:25:54.783860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.426 [2024-11-20 15:25:54.783866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.426 [2024-11-20 15:25:54.783870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.426 [2024-11-20 15:25:54.783874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.426 [2024-11-20 15:25:54.783882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.426 [2024-11-20 15:25:54.783886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.426 [2024-11-20 15:25:54.783890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.426 [2024-11-20 15:25:54.783896] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical FABRIC PROPERTY GET poll iterations on tcp req 0x833a80 (cid 3, qid 0) elided ...]
00:18:08.426 [2024-11-20 15:25:54.788698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.426 [2024-11-20 15:25:54.788719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.426 [2024-11-20 15:25:54.788724] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.426 [2024-11-20 15:25:54.788728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.426 [2024-11-20 15:25:54.788741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.426 [2024-11-20 15:25:54.788746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.426 [2024-11-20 15:25:54.788750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f2d90) 00:18:08.427 [2024-11-20 15:25:54.788759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.427 [2024-11-20 15:25:54.788793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x833a80, cid 3, qid 0 00:18:08.427 [2024-11-20 15:25:54.788843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.427 [2024-11-20 15:25:54.788849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.427 [2024-11-20 15:25:54.788852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.427 [2024-11-20 15:25:54.788856] nvme_tcp.c:1011:nvme_tcp_req_complete:
*DEBUG*: complete tcp_req(0x833a80) on tqpair=0x7f2d90 00:18:08.427 [2024-11-20 15:25:54.788864] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:18:08.427 00:18:08.427 15:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:18:08.427 [2024-11-20 15:25:54.834838] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:18:08.427 [2024-11-20 15:25:54.834903] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87921 ] 00:18:08.688 [2024-11-20 15:25:54.986814] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:18:08.689 [2024-11-20 15:25:54.986898] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:08.689 [2024-11-20 15:25:54.986904] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:08.689 [2024-11-20 15:25:54.986924] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:08.689 [2024-11-20 15:25:54.986936] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:08.689 [2024-11-20 15:25:54.987364] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:18:08.689 [2024-11-20 15:25:54.987425] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x121bd90 0 00:18:08.689 [2024-11-20 15:25:54.994704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:08.689 [2024-11-20 15:25:54.994732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:08.689 [2024-11-20 15:25:54.994738] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:08.689 [2024-11-20 15:25:54.994743] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:08.689 [2024-11-20 15:25:54.994787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:54.994793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:54.994798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121bd90) 00:18:08.689 [2024-11-20 15:25:54.994816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:08.689 [2024-11-20 15:25:54.994852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125c600, cid 0, qid 0 00:18:08.689 [2024-11-20 15:25:55.002701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.689 [2024-11-20 15:25:55.002724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.689 [2024-11-20 15:25:55.002730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.002735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125c600) on tqpair=0x121bd90 00:18:08.689 [2024-11-20 15:25:55.002752] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: 
CNTLID 0x0001 00:18:08.689 [2024-11-20 15:25:55.002764] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:18:08.689 [2024-11-20 15:25:55.002772] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:18:08.689 [2024-11-20 15:25:55.002796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.002802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.002806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121bd90) 00:18:08.689 [2024-11-20 15:25:55.002818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-20 15:25:55.002858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125c600, cid 0, qid 0 00:18:08.689 [2024-11-20 15:25:55.002934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.689 [2024-11-20 15:25:55.002940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.689 [2024-11-20 15:25:55.002944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.002948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125c600) on tqpair=0x121bd90 00:18:08.689 [2024-11-20 15:25:55.002953] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:18:08.689 [2024-11-20 15:25:55.002961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:18:08.689 [2024-11-20 15:25:55.002968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.002972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.002976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121bd90) 00:18:08.689 [2024-11-20 15:25:55.002982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-20 15:25:55.002998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125c600, cid 0, qid 0 00:18:08.689 [2024-11-20 15:25:55.003040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.689 [2024-11-20 15:25:55.003046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.689 [2024-11-20 15:25:55.003050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125c600) on tqpair=0x121bd90 00:18:08.689 [2024-11-20 15:25:55.003060] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:18:08.689 [2024-11-20 15:25:55.003068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:18:08.689 [2024-11-20 15:25:55.003075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003082] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121bd90) 00:18:08.689 [2024-11-20 15:25:55.003089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-20 15:25:55.003102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125c600, cid 0, qid 0 00:18:08.689 [2024-11-20 15:25:55.003147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.689 [2024-11-20 15:25:55.003153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.689 [2024-11-20 15:25:55.003157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125c600) on tqpair=0x121bd90 00:18:08.689 [2024-11-20 15:25:55.003166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:08.689 [2024-11-20 15:25:55.003175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121bd90) 00:18:08.689 [2024-11-20 15:25:55.003189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-20 15:25:55.003203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125c600, cid 0, qid 0 00:18:08.689 [2024-11-20 15:25:55.003246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.689 [2024-11-20 15:25:55.003252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.689 [2024-11-20 15:25:55.003256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125c600) on tqpair=0x121bd90 00:18:08.689 [2024-11-20 15:25:55.003265] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:18:08.689 [2024-11-20 15:25:55.003270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:18:08.689 [2024-11-20 15:25:55.003278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:08.689 [2024-11-20 15:25:55.003390] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:18:08.689 [2024-11-20 15:25:55.003396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:08.689 [2024-11-20 15:25:55.003405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121bd90) 00:18:08.689 [2024-11-20 15:25:55.003420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-20 15:25:55.003435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125c600, cid 0, qid 0 00:18:08.689 [2024-11-20 15:25:55.003478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.689 [2024-11-20 15:25:55.003483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.689 [2024-11-20 15:25:55.003487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125c600) on tqpair=0x121bd90 00:18:08.689 [2024-11-20 15:25:55.003496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:08.689 [2024-11-20 15:25:55.003505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121bd90) 00:18:08.689 [2024-11-20 15:25:55.003518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-20 15:25:55.003532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125c600, cid 0, qid 0 00:18:08.689 [2024-11-20 15:25:55.003574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.689 [2024-11-20 15:25:55.003580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.689 [2024-11-20 15:25:55.003584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125c600) on tqpair=0x121bd90 00:18:08.689 [2024-11-20 15:25:55.003593] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:08.689 [2024-11-20 15:25:55.003598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:18:08.689 [2024-11-20 15:25:55.003606] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:18:08.689 [2024-11-20 15:25:55.003625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:18:08.689 [2024-11-20 15:25:55.003638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.689 [2024-11-20 15:25:55.003642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121bd90) 00:18:08.689 [2024-11-20 15:25:55.003648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-20 15:25:55.003662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125c600, cid 0, qid 0 00:18:08.689 [2024-11-20 15:25:55.003770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:08.689 [2024-11-20 15:25:55.003776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:08.690 [2024-11-20 15:25:55.003780] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
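Annotation (not part of the captured log): the admin-queue bring-up traced above, the icreq and FABRIC CONNECT, the VS/CAP property gets, the CC.EN/CSTS.RDY handshake, and then IDENTIFY, is the sequence that a single spdk_nvme_connect() call drives internally. A minimal host-side sketch in C, assuming the SPDK headers and libraries are available; the application name "identify_sketch" is a placeholder, not something from this test:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch"; /* placeholder app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport string the test passes to spdk_nvme_identify -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Synchronously runs the init state machine logged above:
         * FABRIC CONNECT, VS/CAP property gets, CC.EN 0 -> CSTS.RDY 0,
         * CC.EN 1 -> CSTS.RDY 1, then IDENTIFY and the rest of init. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", (const char *)cdata->sn);

        spdk_nvme_detach(ctrlr); /* same shutdown path logged at the end */
        return 0;
    }

spdk_nvme_connect() returns only once the controller reaches the ready state seen later in this trace, or NULL if the state machine times out or fails.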
00:18:08.690 [2024-11-20 15:25:55.003785] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121bd90): datao=0, datal=4096, cccid=0 00:18:08.690 [2024-11-20 15:25:55.003790] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125c600) on tqpair(0x121bd90): expected_datao=0, payload_size=4096 00:18:08.690 [2024-11-20 15:25:55.003796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.003804] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.003810] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.003818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.690 [2024-11-20 15:25:55.003824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.690 [2024-11-20 15:25:55.003828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.003831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125c600) on tqpair=0x121bd90 00:18:08.690 [2024-11-20 15:25:55.003841] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:18:08.690 [2024-11-20 15:25:55.003847] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:18:08.690 [2024-11-20 15:25:55.003851] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:18:08.690 [2024-11-20 15:25:55.003856] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:18:08.690 [2024-11-20 15:25:55.003861] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:18:08.690 [2024-11-20 15:25:55.003866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:18:08.690 [2024-11-20 15:25:55.003880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:18:08.690 [2024-11-20 15:25:55.003887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.003891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.003895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121bd90) 00:18:08.690 [2024-11-20 15:25:55.003902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.690 [2024-11-20 15:25:55.003918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125c600, cid 0, qid 0 00:18:08.690 [2024-11-20 15:25:55.003964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.690 [2024-11-20 15:25:55.003970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.690 [2024-11-20 15:25:55.003973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.003977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125c600) on tqpair=0x121bd90 00:18:08.690 [2024-11-20 15:25:55.003985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.003989] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.003992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121bd90) 00:18:08.690 [2024-11-20 15:25:55.003998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.690 [2024-11-20 15:25:55.004005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x121bd90) 00:18:08.690 [2024-11-20 15:25:55.004018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.690 [2024-11-20 15:25:55.004024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x121bd90) 00:18:08.690 [2024-11-20 15:25:55.004037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.690 [2024-11-20 15:25:55.004044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121bd90) 00:18:08.690 [2024-11-20 15:25:55.004057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.690 [2024-11-20 15:25:55.004062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:08.690 [2024-11-20 15:25:55.004074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:08.690 [2024-11-20 15:25:55.004081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121bd90) 00:18:08.690 [2024-11-20 15:25:55.004091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.690 [2024-11-20 15:25:55.004107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125c600, cid 0, qid 0 00:18:08.690 [2024-11-20 15:25:55.004112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125c780, cid 1, qid 0 00:18:08.690 [2024-11-20 15:25:55.004116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125c900, cid 2, qid 0 00:18:08.690 [2024-11-20 15:25:55.004121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ca80, cid 3, qid 0 00:18:08.690 [2024-11-20 15:25:55.004125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cc00, cid 4, qid 0 00:18:08.690 [2024-11-20 15:25:55.004204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.690 [2024-11-20 
15:25:55.004210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.690 [2024-11-20 15:25:55.004214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125cc00) on tqpair=0x121bd90 00:18:08.690 [2024-11-20 15:25:55.004224] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:18:08.690 [2024-11-20 15:25:55.004230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:08.690 [2024-11-20 15:25:55.004238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:18:08.690 [2024-11-20 15:25:55.004249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:08.690 [2024-11-20 15:25:55.004255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121bd90) 00:18:08.690 [2024-11-20 15:25:55.004269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.690 [2024-11-20 15:25:55.004283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cc00, cid 4, qid 0 00:18:08.690 [2024-11-20 15:25:55.004329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.690 [2024-11-20 15:25:55.004335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.690 [2024-11-20 15:25:55.004339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125cc00) on tqpair=0x121bd90 00:18:08.690 [2024-11-20 15:25:55.004397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:18:08.690 [2024-11-20 15:25:55.004406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:08.690 [2024-11-20 15:25:55.004415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121bd90) 00:18:08.690 [2024-11-20 15:25:55.004425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.690 [2024-11-20 15:25:55.004439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cc00, cid 4, qid 0 00:18:08.690 [2024-11-20 15:25:55.004491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:08.690 [2024-11-20 15:25:55.004497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:08.690 [2024-11-20 15:25:55.004500] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004504] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121bd90): datao=0, datal=4096, cccid=4 00:18:08.690 [2024-11-20 15:25:55.004509] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125cc00) on tqpair(0x121bd90): expected_datao=0, payload_size=4096 00:18:08.690 [2024-11-20 15:25:55.004514] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004521] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004525] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.690 [2024-11-20 15:25:55.004538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.690 [2024-11-20 15:25:55.004542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125cc00) on tqpair=0x121bd90 00:18:08.690 [2024-11-20 15:25:55.004560] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:18:08.690 [2024-11-20 15:25:55.004575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:18:08.690 [2024-11-20 15:25:55.004585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:18:08.690 [2024-11-20 15:25:55.004591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121bd90) 00:18:08.690 [2024-11-20 15:25:55.004601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.690 [2024-11-20 15:25:55.004616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cc00, cid 4, qid 0 00:18:08.690 [2024-11-20 15:25:55.004700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:08.690 [2024-11-20 15:25:55.004706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:08.690 [2024-11-20 15:25:55.004710] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:08.690 [2024-11-20 15:25:55.004714] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121bd90): datao=0, datal=4096, cccid=4 00:18:08.691 [2024-11-20 15:25:55.004719] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125cc00) on tqpair(0x121bd90): expected_datao=0, payload_size=4096 00:18:08.691 [2024-11-20 15:25:55.004724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.004730] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.004734] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.004741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.691 [2024-11-20 15:25:55.004747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.691 [2024-11-20 15:25:55.004751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.004754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x125cc00) on tqpair=0x121bd90 00:18:08.691 [2024-11-20 15:25:55.004770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:08.691 [2024-11-20 15:25:55.004779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:08.691 [2024-11-20 15:25:55.004786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.004790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121bd90) 00:18:08.691 [2024-11-20 15:25:55.004796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.691 [2024-11-20 15:25:55.004811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cc00, cid 4, qid 0 00:18:08.691 [2024-11-20 15:25:55.004869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:08.691 [2024-11-20 15:25:55.004874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:08.691 [2024-11-20 15:25:55.004878] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.004882] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121bd90): datao=0, datal=4096, cccid=4 00:18:08.691 [2024-11-20 15:25:55.004887] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125cc00) on tqpair(0x121bd90): expected_datao=0, payload_size=4096 00:18:08.691 [2024-11-20 15:25:55.004891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.004897] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.004901] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.004909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.691 [2024-11-20 15:25:55.004914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.691 [2024-11-20 15:25:55.004918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.004922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125cc00) on tqpair=0x121bd90 00:18:08.691 [2024-11-20 15:25:55.004929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:08.691 [2024-11-20 15:25:55.004936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:18:08.691 [2024-11-20 15:25:55.004947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:18:08.691 [2024-11-20 15:25:55.004953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:08.691 [2024-11-20 15:25:55.004959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:08.691 [2024-11-20 15:25:55.004964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 
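Annotation (not part of the captured log): the "identify active ns" and "Namespace 1 was added" steps above are where the host builds the controller's active namespace list. A short sketch of walking that list through the public API, assuming ctrlr was obtained as in the earlier sketch:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Enumerate the namespaces discovered during "identify active ns". */
    static void print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            if (ns == NULL) {
                continue;
            }
            printf("nsid %" PRIu32 ": %" PRIu64 " bytes, sector size %" PRIu32 "\n",
                   nsid, spdk_nvme_ns_get_size(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }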
00:18:08.691 [2024-11-20 15:25:55.004970] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:18:08.691 [2024-11-20 15:25:55.004975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:18:08.691 [2024-11-20 15:25:55.004981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:18:08.691 [2024-11-20 15:25:55.005002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121bd90) 00:18:08.691 [2024-11-20 15:25:55.005012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.691 [2024-11-20 15:25:55.005021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121bd90) 00:18:08.691 [2024-11-20 15:25:55.005035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.691 [2024-11-20 15:25:55.005055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cc00, cid 4, qid 0 00:18:08.691 [2024-11-20 15:25:55.005060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cd80, cid 5, qid 0 00:18:08.691 [2024-11-20 15:25:55.005115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.691 [2024-11-20 15:25:55.005121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.691 [2024-11-20 15:25:55.005124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125cc00) on tqpair=0x121bd90 00:18:08.691 [2024-11-20 15:25:55.005135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.691 [2024-11-20 15:25:55.005141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.691 [2024-11-20 15:25:55.005144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125cd80) on tqpair=0x121bd90 00:18:08.691 [2024-11-20 15:25:55.005157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121bd90) 00:18:08.691 [2024-11-20 15:25:55.005167] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.691 [2024-11-20 15:25:55.005180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cd80, cid 5, qid 0 00:18:08.691 [2024-11-20 15:25:55.005231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.691 [2024-11-20 15:25:55.005237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.691 [2024-11-20 15:25:55.005240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.691 
[2024-11-20 15:25:55.005244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125cd80) on tqpair=0x121bd90 00:18:08.691 [2024-11-20 15:25:55.005253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121bd90) 00:18:08.691 [2024-11-20 15:25:55.005263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.691 [2024-11-20 15:25:55.005277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cd80, cid 5, qid 0 00:18:08.691 [2024-11-20 15:25:55.005319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.691 [2024-11-20 15:25:55.005324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.691 [2024-11-20 15:25:55.005328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125cd80) on tqpair=0x121bd90 00:18:08.691 [2024-11-20 15:25:55.005341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121bd90) 00:18:08.691 [2024-11-20 15:25:55.005351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.691 [2024-11-20 15:25:55.005364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cd80, cid 5, qid 0 00:18:08.691 [2024-11-20 15:25:55.005406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.691 [2024-11-20 15:25:55.005412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.691 [2024-11-20 15:25:55.005416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125cd80) on tqpair=0x121bd90 00:18:08.691 [2024-11-20 15:25:55.005437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121bd90) 00:18:08.691 [2024-11-20 15:25:55.005448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.691 [2024-11-20 15:25:55.005455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121bd90) 00:18:08.691 [2024-11-20 15:25:55.005464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.691 [2024-11-20 15:25:55.005472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x121bd90) 00:18:08.691 [2024-11-20 15:25:55.005481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:08.691 [2024-11-20 15:25:55.005489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x121bd90) 00:18:08.691 [2024-11-20 15:25:55.005499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.691 [2024-11-20 15:25:55.005515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cd80, cid 5, qid 0 00:18:08.691 [2024-11-20 15:25:55.005520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cc00, cid 4, qid 0 00:18:08.691 [2024-11-20 15:25:55.005525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125cf00, cid 6, qid 0 00:18:08.691 [2024-11-20 15:25:55.005529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d080, cid 7, qid 0 00:18:08.691 [2024-11-20 15:25:55.005649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:08.691 [2024-11-20 15:25:55.005655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:08.691 [2024-11-20 15:25:55.005658] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005662] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121bd90): datao=0, datal=8192, cccid=5 00:18:08.691 [2024-11-20 15:25:55.005667] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125cd80) on tqpair(0x121bd90): expected_datao=0, payload_size=8192 00:18:08.691 [2024-11-20 15:25:55.005671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005696] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:08.691 [2024-11-20 15:25:55.005700] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:08.692 [2024-11-20 15:25:55.005710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:08.692 [2024-11-20 15:25:55.005714] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005718] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121bd90): datao=0, datal=512, cccid=4 00:18:08.692 [2024-11-20 15:25:55.005722] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125cc00) on tqpair(0x121bd90): expected_datao=0, payload_size=512 00:18:08.692 [2024-11-20 15:25:55.005727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005733] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005737] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:08.692 [2024-11-20 15:25:55.005747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:08.692 [2024-11-20 15:25:55.005751] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005755] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121bd90): datao=0, datal=512, cccid=6 00:18:08.692 [2024-11-20 15:25:55.005760] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125cf00) on 
tqpair(0x121bd90): expected_datao=0, payload_size=512 00:18:08.692 [2024-11-20 15:25:55.005764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005771] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005774] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:08.692 [2024-11-20 15:25:55.005785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:08.692 [2024-11-20 15:25:55.005789] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005792] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121bd90): datao=0, datal=4096, cccid=7 00:18:08.692 [2024-11-20 15:25:55.005797] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125d080) on tqpair(0x121bd90): expected_datao=0, payload_size=4096 00:18:08.692 [2024-11-20 15:25:55.005802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005809] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005812] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.692 [2024-11-20 15:25:55.005826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.692 [2024-11-20 15:25:55.005830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125cd80) on tqpair=0x121bd90 00:18:08.692 [2024-11-20 15:25:55.005849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.692 [2024-11-20 15:25:55.005855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.692 [2024-11-20 15:25:55.005858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125cc00) on tqpair=0x121bd90 00:18:08.692 [2024-11-20 15:25:55.005876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.692 [2024-11-20 15:25:55.005882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.692 [2024-11-20 15:25:55.005885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125cf00) on tqpair=0x121bd90 00:18:08.692 [2024-11-20 15:25:55.005896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.692 [2024-11-20 15:25:55.005902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.692 [2024-11-20 15:25:55.005906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.692 [2024-11-20 15:25:55.005909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d080) on tqpair=0x121bd90
00:18:08.692 =====================================================
00:18:08.692 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:18:08.692 =====================================================
00:18:08.692 Controller Capabilities/Features
00:18:08.692 ================================
00:18:08.692 Vendor ID: 8086
00:18:08.692 Subsystem Vendor ID: 8086
00:18:08.692 Serial Number: SPDK00000000000001
00:18:08.692 Model Number: SPDK bdev Controller
00:18:08.692 Firmware Version: 25.01
00:18:08.692 Recommended Arb Burst: 6
00:18:08.692 IEEE OUI Identifier: e4 d2 5c
00:18:08.692 Multi-path I/O
00:18:08.692 May have multiple subsystem ports: Yes
00:18:08.692 May have multiple controllers: Yes
00:18:08.692 Associated with SR-IOV VF: No
00:18:08.692 Max Data Transfer Size: 131072
00:18:08.692 Max Number of Namespaces: 32
00:18:08.692 Max Number of I/O Queues: 127
00:18:08.692 NVMe Specification Version (VS): 1.3
00:18:08.692 NVMe Specification Version (Identify): 1.3
00:18:08.692 Maximum Queue Entries: 128
00:18:08.692 Contiguous Queues Required: Yes
00:18:08.692 Arbitration Mechanisms Supported
00:18:08.692 Weighted Round Robin: Not Supported
00:18:08.692 Vendor Specific: Not Supported
00:18:08.692 Reset Timeout: 15000 ms
00:18:08.692 Doorbell Stride: 4 bytes
00:18:08.692 NVM Subsystem Reset: Not Supported
00:18:08.692 Command Sets Supported
00:18:08.692 NVM Command Set: Supported
00:18:08.692 Boot Partition: Not Supported
00:18:08.692 Memory Page Size Minimum: 4096 bytes
00:18:08.692 Memory Page Size Maximum: 4096 bytes
00:18:08.692 Persistent Memory Region: Not Supported
00:18:08.692 Optional Asynchronous Events Supported
00:18:08.692 Namespace Attribute Notices: Supported
00:18:08.692 Firmware Activation Notices: Not Supported
00:18:08.692 ANA Change Notices: Not Supported
00:18:08.692 PLE Aggregate Log Change Notices: Not Supported
00:18:08.692 LBA Status Info Alert Notices: Not Supported
00:18:08.692 EGE Aggregate Log Change Notices: Not Supported
00:18:08.692 Normal NVM Subsystem Shutdown event: Not Supported
00:18:08.692 Zone Descriptor Change Notices: Not Supported
00:18:08.692 Discovery Log Change Notices: Not Supported
00:18:08.692 Controller Attributes
00:18:08.692 128-bit Host Identifier: Supported
00:18:08.692 Non-Operational Permissive Mode: Not Supported
00:18:08.692 NVM Sets: Not Supported
00:18:08.692 Read Recovery Levels: Not Supported
00:18:08.692 Endurance Groups: Not Supported
00:18:08.692 Predictable Latency Mode: Not Supported
00:18:08.692 Traffic Based Keep ALive: Not Supported
00:18:08.692 Namespace Granularity: Not Supported
00:18:08.692 SQ Associations: Not Supported
00:18:08.692 UUID List: Not Supported
00:18:08.692 Multi-Domain Subsystem: Not Supported
00:18:08.692 Fixed Capacity Management: Not Supported
00:18:08.692 Variable Capacity Management: Not Supported
00:18:08.692 Delete Endurance Group: Not Supported
00:18:08.692 Delete NVM Set: Not Supported
00:18:08.692 Extended LBA Formats Supported: Not Supported
00:18:08.692 Flexible Data Placement Supported: Not Supported
00:18:08.692
00:18:08.692 Controller Memory Buffer Support
00:18:08.692 ================================
00:18:08.692 Supported: No
00:18:08.692
00:18:08.692 Persistent Memory Region Support
00:18:08.692 ================================
00:18:08.692 Supported: No
00:18:08.692
00:18:08.692 Admin Command Set Attributes
00:18:08.692 ============================
00:18:08.692 Security Send/Receive: Not Supported
00:18:08.692 Format NVM: Not Supported
00:18:08.692 Firmware Activate/Download: Not Supported
00:18:08.692 Namespace Management: Not Supported
00:18:08.692 Device Self-Test: Not Supported
00:18:08.692 Directives: Not Supported
00:18:08.692 NVMe-MI: Not Supported
00:18:08.692 Virtualization Management: Not Supported
00:18:08.692 Doorbell Buffer Config: Not Supported
00:18:08.692 Get LBA Status Capability: Not Supported
00:18:08.692 Command & Feature Lockdown Capability: Not Supported
00:18:08.692 Abort Command Limit: 4
00:18:08.692 Async Event Request Limit: 4
00:18:08.692 Number of Firmware Slots: N/A
00:18:08.692 Firmware Slot 1 Read-Only: N/A
00:18:08.692 Firmware Activation Without Reset: N/A
00:18:08.692 Multiple Update Detection Support: N/A
00:18:08.692 Firmware Update Granularity: No Information Provided
00:18:08.692 Per-Namespace SMART Log: No
00:18:08.692 Asymmetric Namespace Access Log Page: Not Supported
00:18:08.692 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:18:08.692 Command Effects Log Page: Supported
00:18:08.692 Get Log Page Extended Data: Supported
00:18:08.692 Telemetry Log Pages: Not Supported
00:18:08.692 Persistent Event Log Pages: Not Supported
00:18:08.692 Supported Log Pages Log Page: May Support
00:18:08.692 Commands Supported & Effects Log Page: Not Supported
00:18:08.692 Feature Identifiers & Effects Log Page:May Support
00:18:08.692 NVMe-MI Commands & Effects Log Page: May Support
00:18:08.692 Data Area 4 for Telemetry Log: Not Supported
00:18:08.692 Error Log Page Entries Supported: 128
00:18:08.692 Keep Alive: Supported
00:18:08.692 Keep Alive Granularity: 10000 ms
00:18:08.692
00:18:08.692 NVM Command Set Attributes
00:18:08.692 ==========================
00:18:08.692 Submission Queue Entry Size
00:18:08.692 Max: 64
00:18:08.692 Min: 64
00:18:08.692 Completion Queue Entry Size
00:18:08.692 Max: 16
00:18:08.692 Min: 16
00:18:08.692 Number of Namespaces: 32
00:18:08.692 Compare Command: Supported
00:18:08.692 Write Uncorrectable Command: Not Supported
00:18:08.692 Dataset Management Command: Supported
00:18:08.692 Write Zeroes Command: Supported
00:18:08.692 Set Features Save Field: Not Supported
00:18:08.692 Reservations: Supported
00:18:08.693 Timestamp: Not Supported
00:18:08.693 Copy: Supported
00:18:08.693 Volatile Write Cache: Present
00:18:08.693 Atomic Write Unit (Normal): 1
00:18:08.693 Atomic Write Unit (PFail): 1
00:18:08.693 Atomic Compare & Write Unit: 1
00:18:08.693 Fused Compare & Write: Supported
00:18:08.693 Scatter-Gather List
00:18:08.693 SGL Command Set: Supported
00:18:08.693 SGL Keyed: Supported
00:18:08.693 SGL Bit Bucket Descriptor: Not Supported
00:18:08.693 SGL Metadata Pointer: Not Supported
00:18:08.693 Oversized SGL: Not Supported
00:18:08.693 SGL Metadata Address: Not Supported
00:18:08.693 SGL Offset: Supported
00:18:08.693 Transport SGL Data Block: Not Supported
00:18:08.693 Replay Protected Memory Block: Not Supported
00:18:08.693
00:18:08.693 Firmware Slot Information
00:18:08.693 =========================
00:18:08.693 Active slot: 1
00:18:08.693 Slot 1 Firmware Revision: 25.01
00:18:08.693
00:18:08.693
00:18:08.693 Commands Supported and Effects
00:18:08.693 ==============================
00:18:08.693 Admin Commands
00:18:08.693 --------------
00:18:08.693 Get Log Page (02h): Supported
00:18:08.693 Identify (06h): Supported
00:18:08.693 Abort (08h): Supported
00:18:08.693 Set Features (09h): Supported
00:18:08.693 Get Features (0Ah): Supported
00:18:08.693 Asynchronous Event Request (0Ch): Supported
00:18:08.693 Keep Alive (18h): Supported
00:18:08.693 I/O Commands
00:18:08.693 ------------
00:18:08.693 Flush (00h): Supported LBA-Change
00:18:08.693 Write (01h): Supported LBA-Change
00:18:08.693 Read (02h): Supported
00:18:08.693 Compare (05h): Supported
00:18:08.693 Write Zeroes (08h): Supported LBA-Change
00:18:08.693 Dataset Management (09h): Supported LBA-Change
00:18:08.693 Copy (19h): Supported LBA-Change
00:18:08.693
00:18:08.693 Error Log
00:18:08.693 =========
00:18:08.693
00:18:08.693 Arbitration
00:18:08.693 ===========
00:18:08.693 Arbitration Burst: 1
00:18:08.693
00:18:08.693 Power Management
00:18:08.693 ================
00:18:08.693 Number of Power States: 1
00:18:08.693 Current Power State: Power State #0
00:18:08.693 Power State #0:
00:18:08.693 Max Power: 0.00 W
00:18:08.693 Non-Operational State: Operational
00:18:08.693 Entry Latency: Not Reported
00:18:08.693 Exit Latency: Not Reported
00:18:08.693 Relative Read Throughput: 0
00:18:08.693 Relative Read Latency: 0
00:18:08.693 Relative Write Throughput: 0
00:18:08.693 Relative Write Latency: 0
00:18:08.693 Idle Power: Not Reported
00:18:08.693 Active Power: Not Reported
00:18:08.693 Non-Operational Permissive Mode: Not Supported
00:18:08.693
00:18:08.693 Health Information
00:18:08.693 ==================
00:18:08.693 Critical Warnings:
00:18:08.693 Available Spare Space: OK
00:18:08.693 Temperature: OK
00:18:08.693 Device Reliability: OK
00:18:08.693 Read Only: No
00:18:08.693 Volatile Memory Backup: OK
00:18:08.693 Current Temperature: 0 Kelvin (-273 Celsius)
00:18:08.693 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:18:08.693 Available Spare: 0%
00:18:08.693 Available Spare Threshold: 0%
00:18:08.693 Life Percentage Used:[2024-11-20 15:25:55.006015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.693 [2024-11-20 15:25:55.006020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x121bd90) 00:18:08.693 [2024-11-20 15:25:55.006027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.693 [2024-11-20 15:25:55.006046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d080, cid 7, qid 0 00:18:08.693 [2024-11-20 15:25:55.006098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.693 [2024-11-20 15:25:55.006104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.693 [2024-11-20 15:25:55.006107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.693 [2024-11-20 15:25:55.006111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d080) on tqpair=0x121bd90 00:18:08.693 [2024-11-20 15:25:55.006159] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:18:08.693 [2024-11-20 15:25:55.006169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125c600) on tqpair=0x121bd90 00:18:08.693 [2024-11-20 15:25:55.006176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.693 [2024-11-20 15:25:55.006181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125c780) on tqpair=0x121bd90 00:18:08.693 [2024-11-20 15:25:55.006186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.693 [2024-11-20 15:25:55.006191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125c900) on tqpair=0x121bd90 00:18:08.693 [2024-11-20 15:25:55.006196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.693 [2024-11-20 15:25:55.006201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ca80) on tqpair=0x121bd90 00:18:08.693 [2024-11-20 15:25:55.006205] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.693 [2024-11-20 15:25:55.006214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.693 [2024-11-20 15:25:55.006226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.693 [2024-11-20 15:25:55.006230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121bd90) 00:18:08.693 [2024-11-20 15:25:55.006237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.693 [2024-11-20 15:25:55.006255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ca80, cid 3, qid 0 00:18:08.693 [2024-11-20 15:25:55.006300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.693 [2024-11-20 15:25:55.006306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.693 [2024-11-20 15:25:55.006309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.693 [2024-11-20 15:25:55.006313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ca80) on tqpair=0x121bd90 00:18:08.693 [2024-11-20 15:25:55.006321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.693 [2024-11-20 15:25:55.006325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.693 [2024-11-20 15:25:55.006328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121bd90) 00:18:08.693 [2024-11-20 15:25:55.006334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.693 [2024-11-20 15:25:55.006351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ca80, cid 3, qid 0 00:18:08.693 [2024-11-20 15:25:55.006415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.693 [2024-11-20 15:25:55.006420] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.693 [2024-11-20 15:25:55.006424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.693 [2024-11-20 15:25:55.006428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ca80) on tqpair=0x121bd90 00:18:08.693 [2024-11-20 15:25:55.006433] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:18:08.693 [2024-11-20 15:25:55.006438] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:18:08.693 [2024-11-20 15:25:55.006447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.693 [2024-11-20 15:25:55.006451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.693 [2024-11-20 15:25:55.006455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121bd90) 00:18:08.693 [2024-11-20 15:25:55.006461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.693 [2024-11-20 15:25:55.006475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ca80, cid 3, qid 0 00:18:08.693 [2024-11-20 15:25:55.006516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.693 [2024-11-20 15:25:55.006522] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.693 [2024-11-20 
15:25:55.006526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.693 [2024-11-20 15:25:55.006529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ca80) on tqpair=0x121bd90 00:18:08.693 [2024-11-20 15:25:55.006539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.693 [2024-11-20 15:25:55.006544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.694 [2024-11-20 15:25:55.006547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121bd90) 00:18:08.694 [2024-11-20 15:25:55.006553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.694 [2024-11-20 15:25:55.006567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ca80, cid 3, qid 0 00:18:08.694 [2024-11-20 15:25:55.006615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.694 [2024-11-20 15:25:55.006621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.694 [2024-11-20 15:25:55.006624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.694 [2024-11-20 15:25:55.006628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ca80) on tqpair=0x121bd90 00:18:08.694 [2024-11-20 15:25:55.006637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.694 [2024-11-20 15:25:55.006641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.694 [2024-11-20 15:25:55.006644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121bd90) 00:18:08.694 [2024-11-20 15:25:55.006651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.694 [2024-11-20 15:25:55.006664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ca80, cid 3, qid 0 00:18:08.694 [2024-11-20 15:25:55.010695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.694 [2024-11-20 15:25:55.010712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.694 [2024-11-20 15:25:55.010717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.694 [2024-11-20 15:25:55.010721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ca80) on tqpair=0x121bd90 00:18:08.694 [2024-11-20 15:25:55.010739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:08.694 [2024-11-20 15:25:55.010743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:08.694 [2024-11-20 15:25:55.010747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121bd90) 00:18:08.694 [2024-11-20 15:25:55.010757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.694 [2024-11-20 15:25:55.010790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ca80, cid 3, qid 0 00:18:08.694 [2024-11-20 15:25:55.010840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:08.694 [2024-11-20 15:25:55.010846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:08.694 [2024-11-20 15:25:55.010850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:08.694 [2024-11-20 15:25:55.010854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ca80) on 
tqpair=0x121bd90 00:18:08.694 [2024-11-20 15:25:55.010861] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:18:08.694 0% 00:18:08.694 Data Units Read: 0 00:18:08.694 Data Units Written: 0 00:18:08.694 Host Read Commands: 0 00:18:08.694 Host Write Commands: 0 00:18:08.694 Controller Busy Time: 0 minutes 00:18:08.694 Power Cycles: 0 00:18:08.694 Power On Hours: 0 hours 00:18:08.694 Unsafe Shutdowns: 0 00:18:08.694 Unrecoverable Media Errors: 0 00:18:08.694 Lifetime Error Log Entries: 0 00:18:08.694 Warning Temperature Time: 0 minutes 00:18:08.694 Critical Temperature Time: 0 minutes 00:18:08.694 00:18:08.694 Number of Queues 00:18:08.694 ================ 00:18:08.694 Number of I/O Submission Queues: 127 00:18:08.694 Number of I/O Completion Queues: 127 00:18:08.694 00:18:08.694 Active Namespaces 00:18:08.694 ================= 00:18:08.694 Namespace ID:1 00:18:08.694 Error Recovery Timeout: Unlimited 00:18:08.694 Command Set Identifier: NVM (00h) 00:18:08.694 Deallocate: Supported 00:18:08.694 Deallocated/Unwritten Error: Not Supported 00:18:08.694 Deallocated Read Value: Unknown 00:18:08.694 Deallocate in Write Zeroes: Not Supported 00:18:08.694 Deallocated Guard Field: 0xFFFF 00:18:08.694 Flush: Supported 00:18:08.694 Reservation: Supported 00:18:08.694 Namespace Sharing Capabilities: Multiple Controllers 00:18:08.694 Size (in LBAs): 131072 (0GiB) 00:18:08.694 Capacity (in LBAs): 131072 (0GiB) 00:18:08.694 Utilization (in LBAs): 131072 (0GiB) 00:18:08.694 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:08.694 EUI64: ABCDEF0123456789 00:18:08.694 UUID: 78706779-9c7b-45ce-85e2-09e6a65e467f 00:18:08.694 Thin Provisioning: Not Supported 00:18:08.694 Per-NS Atomic Units: Yes 00:18:08.694 Atomic Boundary Size (Normal): 0 00:18:08.694 Atomic Boundary Size (PFail): 0 00:18:08.694 Atomic Boundary Offset: 0 00:18:08.694 Maximum Single Source Range Length: 65535 00:18:08.694 Maximum Copy Length: 65535 00:18:08.694 Maximum Source Range Count: 1 00:18:08.694 NGUID/EUI64 Never Reused: No 00:18:08.694 Namespace Write Protected: No 00:18:08.694 Number of LBA Formats: 1 00:18:08.694 Current LBA Format: LBA Format #00 00:18:08.694 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:08.694 00:18:08.694 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:18:08.694 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.694 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.694 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:08.694 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.694 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:08.694 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:18:08.694 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:08.694 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:18:08.694 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:08.694 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:18:08.694 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:08.694 15:25:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:08.694 rmmod nvme_tcp 00:18:08.694 rmmod nvme_fabrics 00:18:08.694 rmmod nvme_keyring 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 87860 ']' 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 87860 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 87860 ']' 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 87860 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87860 00:18:08.954 killing process with pid 87860 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87860' 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 87860 00:18:08.954 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 87860 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
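
The nvmftestfini teardown being traced here, and continuing just below, follows a fixed order: unload the kernel initiator modules, kill the target process, strip only the SPDK-tagged iptables rules, then dismantle the veth/bridge topology and the target namespace. Condensed into a standalone sketch (module, interface, and namespace names are taken from the trace; the error guards and the loop are illustrative, not the common.sh implementation):

    # Sketch of the nvmftestfini-style cleanup traced above (names from the log).
    modprobe -v -r nvme-tcp || true         # initiator kernel modules
    modprobe -v -r nvme-fabrics || true
    kill "$nvmfpid" 2>/dev/null             # nvmfpid: pid saved at startup (87860 here)

    # Remove only the firewall rules the test added; they carry SPDK_NVMF comments.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach and delete the veth/bridge topology, then the target namespace.
    for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" nomaster 2>/dev/null
        ip link set "$peer" down 2>/dev/null
    done
    ip link delete nvmf_br type bridge 2>/dev/null
    ip link delete nvmf_init_if 2>/dev/null
    ip link delete nvmf_init_if2 2>/dev/null
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # what remove_spdk_ns amounts to (assumed)
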
00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:09.213 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:09.473 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:09.473 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:09.473 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:09.473 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:09.473 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:09.473 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.473 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.473 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.473 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:18:09.473 00:18:09.473 real 0m3.542s 00:18:09.473 user 0m8.162s 00:18:09.473 sys 0m1.140s 00:18:09.473 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.473 15:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:09.473 ************************************ 00:18:09.473 END TEST nvmf_identify 00:18:09.473 ************************************ 00:18:09.732 15:25:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:09.732 15:25:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:09.732 15:25:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.732 15:25:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.732 ************************************ 00:18:09.732 START TEST nvmf_perf 00:18:09.732 ************************************ 00:18:09.732 15:25:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:09.732 * Looking for test storage... 
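
The START TEST/END TEST banners and the real/user/sys timing printed between the two suites come from the run_test wrapper in autotest_common.sh, through which each suite above is launched. A simplified sketch of that wrapper pattern (banner text mirrors the log; the function body is illustrative, not the actual autotest_common.sh code):

    # Illustrative run_test-style wrapper: frame a test script with banners and time it.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                  # e.g. perf.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return "$rc"
    }

    run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
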
00:18:09.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:09.732 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:09.732 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:09.732 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:09.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.992 --rc genhtml_branch_coverage=1 00:18:09.992 --rc genhtml_function_coverage=1 00:18:09.992 --rc genhtml_legend=1 00:18:09.992 --rc geninfo_all_blocks=1 00:18:09.992 --rc geninfo_unexecuted_blocks=1 00:18:09.992 00:18:09.992 ' 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:09.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.992 --rc genhtml_branch_coverage=1 00:18:09.992 --rc genhtml_function_coverage=1 00:18:09.992 --rc genhtml_legend=1 00:18:09.992 --rc geninfo_all_blocks=1 00:18:09.992 --rc geninfo_unexecuted_blocks=1 00:18:09.992 00:18:09.992 ' 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:09.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.992 --rc genhtml_branch_coverage=1 00:18:09.992 --rc genhtml_function_coverage=1 00:18:09.992 --rc genhtml_legend=1 00:18:09.992 --rc geninfo_all_blocks=1 00:18:09.992 --rc geninfo_unexecuted_blocks=1 00:18:09.992 00:18:09.992 ' 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:09.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.992 --rc genhtml_branch_coverage=1 00:18:09.992 --rc genhtml_function_coverage=1 00:18:09.992 --rc genhtml_legend=1 00:18:09.992 --rc geninfo_all_blocks=1 00:18:09.992 --rc geninfo_unexecuted_blocks=1 00:18:09.992 00:18:09.992 ' 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.992 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:09.993 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:09.993 Cannot find device "nvmf_init_br" 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:09.993 Cannot find device "nvmf_init_br2" 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:09.993 Cannot find device "nvmf_tgt_br" 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.993 Cannot find device "nvmf_tgt_br2" 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:09.993 Cannot find device "nvmf_init_br" 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:09.993 Cannot find device "nvmf_init_br2" 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:09.993 Cannot find device "nvmf_tgt_br" 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:09.993 Cannot find device "nvmf_tgt_br2" 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:18:09.993 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:10.252 Cannot find device "nvmf_br" 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:10.252 Cannot find device "nvmf_init_if" 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:10.252 Cannot find device "nvmf_init_if2" 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:10.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:10.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:10.252 15:25:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:10.252 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:10.253 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:10.253 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:10.253 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:10.511 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:10.511 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.178 ms 00:18:10.511 00:18:10.511 --- 10.0.0.3 ping statistics --- 00:18:10.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.511 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:10.511 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:10.511 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:18:10.511 00:18:10.511 --- 10.0.0.4 ping statistics --- 00:18:10.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.511 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:10.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:18:10.511 00:18:10.511 --- 10.0.0.1 ping statistics --- 00:18:10.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.511 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:10.511 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:10.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:18:10.512 00:18:10.512 --- 10.0.0.2 ping statistics --- 00:18:10.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.512 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=88145 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 88145 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 88145 ']' 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
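
Before the target starts, the nvmf_veth_init sequence traced above has built a small two-sided topology: two initiator veth pairs kept in the root namespace with 10.0.0.1/2, two target veth pairs whose far ends sit in nvmf_tgt_ns_spdk with 10.0.0.3/4, and all four peer ends joined by the nvmf_br bridge. Condensed from the commands in the trace (same names, addresses, and firewall rules; only the loop is a compact rewrite):

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if end carries the address, the *_br end joins the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator side in the root namespace, target side inside the netns.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring the endpoints up and bridge the four peer ends together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" up
        ip link set "$peer" master nvmf_br
    done

    # Open TCP/4420 on the initiator interfaces, allow bridge forwarding, verify.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
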
00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:10.512 15:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:10.512 [2024-11-20 15:25:56.974340] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:18:10.512 [2024-11-20 15:25:56.974449] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.771 [2024-11-20 15:25:57.129550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:10.771 [2024-11-20 15:25:57.214954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.771 [2024-11-20 15:25:57.215044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.771 [2024-11-20 15:25:57.215063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.771 [2024-11-20 15:25:57.215078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.771 [2024-11-20 15:25:57.215089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.771 [2024-11-20 15:25:57.216577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.771 [2024-11-20 15:25:57.216749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.771 [2024-11-20 15:25:57.216835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.771 [2024-11-20 15:25:57.216835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.707 15:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.707 15:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:18:11.707 15:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:11.707 15:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:11.707 15:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:11.707 15:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.707 15:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:18:11.707 15:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:11.967 15:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:18:11.967 15:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:12.225 15:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:18:12.225 15:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.531 15:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:12.531 15:25:58 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:18:12.531 15:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:12.531 15:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:12.531 15:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:12.794 [2024-11-20 15:25:59.136886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.794 15:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.053 15:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:13.053 15:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.312 15:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:13.312 15:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:13.572 15:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:13.830 [2024-11-20 15:26:00.061173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:13.830 15:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:14.089 15:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:14.089 15:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:14.089 15:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:14.089 15:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:15.026 Initializing NVMe Controllers 00:18:15.026 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:15.026 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:15.026 Initialization complete. Launching workers. 
00:18:15.026 ======================================================== 00:18:15.026 Latency(us) 00:18:15.026 Device Information : IOPS MiB/s Average min max 00:18:15.026 PCIE (0000:00:10.0) NSID 1 from core 0: 20286.79 79.25 1577.97 361.49 8151.51 00:18:15.026 ======================================================== 00:18:15.026 Total : 20286.79 79.25 1577.97 361.49 8151.51 00:18:15.026 00:18:15.026 15:26:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:16.406 Initializing NVMe Controllers 00:18:16.406 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:16.406 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:16.406 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:16.406 Initialization complete. Launching workers. 00:18:16.406 ======================================================== 00:18:16.406 Latency(us) 00:18:16.406 Device Information : IOPS MiB/s Average min max 00:18:16.406 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4231.00 16.53 235.13 83.41 4335.93 00:18:16.406 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8175.39 7903.91 12045.90 00:18:16.406 ======================================================== 00:18:16.406 Total : 4354.00 17.01 459.44 83.41 12045.90 00:18:16.406 00:18:16.406 15:26:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:17.786 Initializing NVMe Controllers 00:18:17.786 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:17.786 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:17.786 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:17.786 Initialization complete. Launching workers. 00:18:17.786 ======================================================== 00:18:17.786 Latency(us) 00:18:17.786 Device Information : IOPS MiB/s Average min max 00:18:17.786 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10356.98 40.46 3090.21 619.06 6661.80 00:18:17.786 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2692.99 10.52 11970.66 7050.31 20383.24 00:18:17.786 ======================================================== 00:18:17.786 Total : 13049.97 50.98 4922.78 619.06 20383.24 00:18:17.786 00:18:17.786 15:26:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:17.786 15:26:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:20.368 Initializing NVMe Controllers 00:18:20.368 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:20.369 Controller IO queue size 128, less than required. 00:18:20.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:20.369 Controller IO queue size 128, less than required. 
00:18:20.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:20.369 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:20.369 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:20.369 Initialization complete. Launching workers. 00:18:20.369 ======================================================== 00:18:20.369 Latency(us) 00:18:20.369 Device Information : IOPS MiB/s Average min max 00:18:20.369 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1578.11 394.53 83018.42 57891.81 153298.77 00:18:20.369 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 569.68 142.42 229515.04 116438.27 344026.37 00:18:20.369 ======================================================== 00:18:20.369 Total : 2147.79 536.95 121875.13 57891.81 344026.37 00:18:20.369 00:18:20.369 15:26:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:20.628 Initializing NVMe Controllers 00:18:20.628 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:20.628 Controller IO queue size 128, less than required. 00:18:20.628 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:20.628 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:20.628 Controller IO queue size 128, less than required. 00:18:20.628 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:20.628 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:20.628 WARNING: Some requested NVMe devices were skipped 00:18:20.628 No valid NVMe controllers or AIO or URING devices found 00:18:20.628 15:26:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:23.165 Initializing NVMe Controllers 00:18:23.165 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:23.165 Controller IO queue size 128, less than required. 00:18:23.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:23.165 Controller IO queue size 128, less than required. 00:18:23.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:23.165 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:23.165 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:23.165 Initialization complete. Launching workers. 
00:18:23.165 00:18:23.165 ==================== 00:18:23.165 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:23.165 TCP transport: 00:18:23.165 polls: 7318 00:18:23.165 idle_polls: 4861 00:18:23.165 sock_completions: 2457 00:18:23.165 nvme_completions: 4795 00:18:23.165 submitted_requests: 7130 00:18:23.165 queued_requests: 1 00:18:23.165 00:18:23.165 ==================== 00:18:23.165 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:23.165 TCP transport: 00:18:23.165 polls: 7614 00:18:23.165 idle_polls: 5145 00:18:23.165 sock_completions: 2469 00:18:23.165 nvme_completions: 5039 00:18:23.165 submitted_requests: 7602 00:18:23.165 queued_requests: 1 00:18:23.165 ======================================================== 00:18:23.165 Latency(us) 00:18:23.165 Device Information : IOPS MiB/s Average min max 00:18:23.165 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1197.66 299.41 109466.83 69588.12 162658.42 00:18:23.165 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1258.61 314.65 102290.44 51639.22 143888.39 00:18:23.165 ======================================================== 00:18:23.165 Total : 2456.27 614.07 105789.59 51639.22 162658.42 00:18:23.165 00:18:23.165 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:23.165 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.424 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:23.424 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:23.424 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:23.424 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:23.424 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:23.424 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:23.424 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:23.424 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:23.424 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:23.424 rmmod nvme_tcp 00:18:23.424 rmmod nvme_fabrics 00:18:23.424 rmmod nvme_keyring 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 88145 ']' 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 88145 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 88145 ']' 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 88145 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88145 00:18:23.684 killing process with pid 88145 00:18:23.684 15:26:09 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88145' 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 88145 00:18:23.684 15:26:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 88145 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:24.252 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:24.544 ************************************ 00:18:24.544 END TEST nvmf_perf 00:18:24.544 ************************************ 
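The teardown traced above is the harness's standard exit path: drop the subsystem over JSON-RPC, kill the target, unload the kernel initiator modules, strip the test's tagged iptables rules, and dismantle the veth/bridge/namespace topology. Condensed to the underlying commands (all visible in the trace except the final netns removal, which is an assumption about what _remove_spdk_ns expands to):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep everything except the SPDK_NVMF-tagged rules
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" nomaster; ip link set "$l" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed: the trace only shows the _remove_spdk_ns wrapper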
00:18:24.544 00:18:24.544 real 0m14.978s 00:18:24.544 user 0m52.186s 00:18:24.544 sys 0m4.601s 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.544 15:26:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:24.829 15:26:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:24.829 15:26:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:24.829 15:26:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.829 15:26:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.829 ************************************ 00:18:24.829 START TEST nvmf_fio_host 00:18:24.829 ************************************ 00:18:24.829 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:24.829 * Looking for test storage... 00:18:24.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:24.829 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:24.829 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:24.829 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:24.829 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:24.829 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.829 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:24.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.830 --rc genhtml_branch_coverage=1 00:18:24.830 --rc genhtml_function_coverage=1 00:18:24.830 --rc genhtml_legend=1 00:18:24.830 --rc geninfo_all_blocks=1 00:18:24.830 --rc geninfo_unexecuted_blocks=1 00:18:24.830 00:18:24.830 ' 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:24.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.830 --rc genhtml_branch_coverage=1 00:18:24.830 --rc genhtml_function_coverage=1 00:18:24.830 --rc genhtml_legend=1 00:18:24.830 --rc geninfo_all_blocks=1 00:18:24.830 --rc geninfo_unexecuted_blocks=1 00:18:24.830 00:18:24.830 ' 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:24.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.830 --rc genhtml_branch_coverage=1 00:18:24.830 --rc genhtml_function_coverage=1 00:18:24.830 --rc genhtml_legend=1 00:18:24.830 --rc geninfo_all_blocks=1 00:18:24.830 --rc geninfo_unexecuted_blocks=1 00:18:24.830 00:18:24.830 ' 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:24.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.830 --rc genhtml_branch_coverage=1 00:18:24.830 --rc genhtml_function_coverage=1 00:18:24.830 --rc genhtml_legend=1 00:18:24.830 --rc geninfo_all_blocks=1 00:18:24.830 --rc geninfo_unexecuted_blocks=1 00:18:24.830 00:18:24.830 ' 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:24.830 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.096 15:26:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.096 15:26:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.096 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:25.097 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:25.097 Cannot find device "nvmf_init_br" 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:25.097 Cannot find device "nvmf_init_br2" 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:25.097 Cannot find device "nvmf_tgt_br" 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:25.097 Cannot find device "nvmf_tgt_br2" 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:25.097 Cannot find device "nvmf_init_br" 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:25.097 Cannot find device "nvmf_init_br2" 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:25.097 Cannot find device "nvmf_tgt_br" 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:25.097 Cannot find device "nvmf_tgt_br2" 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:25.097 Cannot find device "nvmf_br" 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:25.097 Cannot find device "nvmf_init_if" 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:25.097 Cannot find device "nvmf_init_if2" 00:18:25.097 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:25.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:25.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:25.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.142 ms 00:18:25.358 00:18:25.358 --- 10.0.0.3 ping statistics --- 00:18:25.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.358 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:18:25.358 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:25.618 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:25.618 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.116 ms 00:18:25.618 00:18:25.618 --- 10.0.0.4 ping statistics --- 00:18:25.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.618 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:25.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:25.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:18:25.618 00:18:25.618 --- 10.0.0.1 ping statistics --- 00:18:25.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.618 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:25.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:18:25.618 00:18:25.618 --- 10.0.0.2 ping statistics --- 00:18:25.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.618 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=88679 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 88679 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 88679 ']' 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
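nvmf_veth_init has just rebuilt the test network that these pings verify: two initiator-side veths on the host (10.0.0.1 and 10.0.0.2), two target-side veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all four peer ends enslaved to the nvmf_br bridge, plus iptables ACCEPT rules for port 4420. A minimal sketch of one initiator/target pair using the names and addresses from the trace (the *_if2 pair is built the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host side can now reach the target address across the bridge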
00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.618 15:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:25.618 [2024-11-20 15:26:11.962359] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:18:25.618 [2024-11-20 15:26:11.962455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.878 [2024-11-20 15:26:12.103534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:25.878 [2024-11-20 15:26:12.187946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.878 [2024-11-20 15:26:12.188021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.878 [2024-11-20 15:26:12.188032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.878 [2024-11-20 15:26:12.188041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.878 [2024-11-20 15:26:12.188049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
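With the namespace networked, the target is launched inside it (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, per the trace above) and, once the RPC socket answers, provisioned over JSON-RPC. The sequence traced below, condensed, with flags exactly as they appear in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1    # 64 MiB RAM-backed bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420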
00:18:25.878 [2024-11-20 15:26:12.189446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.878 [2024-11-20 15:26:12.189644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.878 [2024-11-20 15:26:12.189739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:25.878 [2024-11-20 15:26:12.189815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.816 15:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.816 15:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:18:26.816 15:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:26.816 [2024-11-20 15:26:13.205635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.816 15:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:26.817 15:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.817 15:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.817 15:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:27.386 Malloc1 00:18:27.386 15:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:27.386 15:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:27.646 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:27.906 [2024-11-20 15:26:14.323325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:27.906 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:28.166 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:28.425 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:28.425 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:28.426 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:28.426 15:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:28.426 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:28.426 fio-3.35 00:18:28.426 Starting 1 thread 00:18:30.965 00:18:30.965 test: (groupid=0, jobs=1): err= 0: pid=88809: Wed Nov 20 15:26:17 2024 00:18:30.965 read: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(89.8MiB/2005msec) 00:18:30.965 slat (nsec): min=1546, max=427226, avg=1728.94, stdev=3439.85 00:18:30.965 clat (usec): min=2934, max=13417, avg=5867.84, stdev=537.13 00:18:30.965 lat (usec): min=2936, max=13428, avg=5869.57, stdev=537.37 00:18:30.965 clat percentiles (usec): 00:18:30.965 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5538], 00:18:30.965 | 30.00th=[ 5669], 40.00th=[ 5735], 50.00th=[ 5800], 60.00th=[ 5932], 00:18:30.965 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6521], 00:18:30.965 | 99.00th=[ 7242], 99.50th=[ 8979], 99.90th=[11863], 99.95th=[12780], 00:18:30.965 | 99.99th=[13435] 00:18:30.965 bw ( KiB/s): min=44920, max=46560, per=99.91%, avg=45834.75, stdev=780.84, samples=4 00:18:30.965 iops : min=11230, max=11640, avg=11458.50, stdev=195.03, samples=4 00:18:30.965 write: IOPS=11.4k, BW=44.5MiB/s (46.6MB/s)(89.2MiB/2005msec); 0 zone resets 00:18:30.965 slat (nsec): min=1587, max=329452, avg=1860.03, stdev=2357.32 00:18:30.965 clat (usec): min=2240, max=9740, avg=5301.07, stdev=404.32 00:18:30.965 lat (usec): min=2242, max=9742, avg=5302.93, stdev=404.55 00:18:30.965 clat percentiles (usec): 00:18:30.965 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5014], 
00:18:30.965 | 30.00th=[ 5145], 40.00th=[ 5211], 50.00th=[ 5276], 60.00th=[ 5342], 00:18:30.965 | 70.00th=[ 5473], 80.00th=[ 5538], 90.00th=[ 5735], 95.00th=[ 5866], 00:18:30.965 | 99.00th=[ 6259], 99.50th=[ 6849], 99.90th=[ 8586], 99.95th=[ 9110], 00:18:30.965 | 99.99th=[ 9634] 00:18:30.965 bw ( KiB/s): min=45184, max=45824, per=99.93%, avg=45511.00, stdev=346.20, samples=4 00:18:30.965 iops : min=11296, max=11456, avg=11377.75, stdev=86.55, samples=4 00:18:30.965 lat (msec) : 4=0.41%, 10=99.50%, 20=0.10% 00:18:30.965 cpu : usr=66.82%, sys=26.30%, ctx=33, majf=0, minf=7 00:18:30.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:30.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.965 issued rwts: total=22994,22829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.965 00:18:30.965 Run status group 0 (all jobs): 00:18:30.965 READ: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=89.8MiB (94.2MB), run=2005-2005msec 00:18:30.965 WRITE: bw=44.5MiB/s (46.6MB/s), 44.5MiB/s-44.5MiB/s (46.6MB/s-46.6MB/s), io=89.2MiB (93.5MB), run=2005-2005msec 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:30.965 15:26:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:30.965 15:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:30.965 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:30.965 fio-3.35 00:18:30.965 Starting 1 thread 00:18:33.508 00:18:33.508 test: (groupid=0, jobs=1): err= 0: pid=88858: Wed Nov 20 15:26:19 2024 00:18:33.508 read: IOPS=9155, BW=143MiB/s (150MB/s)(287MiB/2006msec) 00:18:33.508 slat (nsec): min=2485, max=95056, avg=2790.51, stdev=1580.70 00:18:33.508 clat (usec): min=2358, max=17686, avg=8207.36, stdev=1903.92 00:18:33.508 lat (usec): min=2360, max=17693, avg=8210.15, stdev=1904.03 00:18:33.508 clat percentiles (usec): 00:18:33.508 | 1.00th=[ 4178], 5.00th=[ 5014], 10.00th=[ 5669], 20.00th=[ 6587], 00:18:33.508 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8291], 60.00th=[ 8586], 00:18:33.508 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[10683], 95.00th=[11469], 00:18:33.508 | 99.00th=[13042], 99.50th=[13566], 99.90th=[15008], 99.95th=[15270], 00:18:33.508 | 99.99th=[15795] 00:18:33.508 bw ( KiB/s): min=67232, max=77024, per=49.32%, avg=72240.00, stdev=4075.74, samples=4 00:18:33.508 iops : min= 4202, max= 4814, avg=4515.00, stdev=254.73, samples=4 00:18:33.508 write: IOPS=5205, BW=81.3MiB/s (85.3MB/s)(147MiB/1812msec); 0 zone resets 00:18:33.508 slat (usec): min=28, max=363, avg=30.99, stdev= 7.77 00:18:33.508 clat (usec): min=3514, max=19294, avg=10298.26, stdev=2125.48 00:18:33.508 lat (usec): min=3544, max=19323, avg=10329.25, stdev=2126.46 00:18:33.508 clat percentiles (usec): 00:18:33.508 | 1.00th=[ 6652], 5.00th=[ 7373], 10.00th=[ 7767], 20.00th=[ 8455], 00:18:33.508 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10552], 00:18:33.508 | 70.00th=[11338], 80.00th=[11994], 90.00th=[13304], 95.00th=[14222], 00:18:33.508 | 99.00th=[15664], 99.50th=[16319], 99.90th=[18482], 99.95th=[19006], 00:18:33.508 | 99.99th=[19268] 00:18:33.508 bw ( KiB/s): min=69536, max=79680, per=89.98%, avg=74936.00, stdev=4194.59, samples=4 00:18:33.508 iops : min= 4346, max= 4980, avg=4683.50, stdev=262.16, samples=4 00:18:33.508 lat (msec) : 4=0.58%, 10=71.66%, 20=27.77% 00:18:33.508 cpu : usr=72.62%, sys=19.15%, ctx=4, majf=0, minf=10 00:18:33.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:18:33.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:33.508 issued rwts: total=18365,9432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:33.508 00:18:33.508 Run status group 0 (all jobs): 00:18:33.509 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=287MiB (301MB), run=2006-2006msec 
00:18:33.509 WRITE: bw=81.3MiB/s (85.3MB/s), 81.3MiB/s-81.3MiB/s (85.3MB/s-85.3MB/s), io=147MiB (155MB), run=1812-1812msec 00:18:33.509 15:26:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:33.509 15:26:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:18:33.509 15:26:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:33.509 15:26:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:33.509 15:26:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:33.509 15:26:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:33.509 15:26:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:33.768 rmmod nvme_tcp 00:18:33.768 rmmod nvme_fabrics 00:18:33.768 rmmod nvme_keyring 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 88679 ']' 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 88679 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 88679 ']' 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 88679 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88679 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88679' 00:18:33.768 killing process with pid 88679 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 88679 00:18:33.768 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 88679 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:18:34.027 15:26:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.027 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:18:34.286 00:18:34.286 real 0m9.672s 00:18:34.286 user 0m36.959s 00:18:34.286 sys 0m3.045s 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.286 15:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.286 ************************************ 00:18:34.286 END TEST nvmf_fio_host 00:18:34.286 ************************************ 00:18:34.545 15:26:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:34.545 15:26:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.545 15:26:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.545 15:26:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.545 ************************************ 00:18:34.545 START TEST nvmf_failover 00:18:34.545 ************************************ 00:18:34.545 15:26:20 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:34.545 * Looking for test storage... 00:18:34.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:34.545 15:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:34.545 15:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:34.545 15:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.545 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.805 --rc genhtml_branch_coverage=1 00:18:34.805 --rc genhtml_function_coverage=1 00:18:34.805 --rc genhtml_legend=1 00:18:34.805 --rc geninfo_all_blocks=1 00:18:34.805 --rc geninfo_unexecuted_blocks=1 00:18:34.805 00:18:34.805 ' 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.805 --rc genhtml_branch_coverage=1 00:18:34.805 --rc genhtml_function_coverage=1 00:18:34.805 --rc genhtml_legend=1 00:18:34.805 --rc geninfo_all_blocks=1 00:18:34.805 --rc geninfo_unexecuted_blocks=1 00:18:34.805 00:18:34.805 ' 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.805 --rc genhtml_branch_coverage=1 00:18:34.805 --rc genhtml_function_coverage=1 00:18:34.805 --rc genhtml_legend=1 00:18:34.805 --rc geninfo_all_blocks=1 00:18:34.805 --rc geninfo_unexecuted_blocks=1 00:18:34.805 00:18:34.805 ' 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.805 --rc genhtml_branch_coverage=1 00:18:34.805 --rc genhtml_function_coverage=1 00:18:34.805 --rc genhtml_legend=1 00:18:34.805 --rc geninfo_all_blocks=1 00:18:34.805 --rc geninfo_unexecuted_blocks=1 00:18:34.805 00:18:34.805 ' 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.805 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.806 
15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.806 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
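Note: the "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" diagnostic above is bash complaining that '[' '' -eq 1 ']' feeds an empty string to a numeric test; the test simply evaluates false and the harness carries on. A minimal reproduction (illustrative variable name, not the harness's own code), with the usual defensive form:

  var=""                                       # empty expansion, as in the traced '[' '' -eq 1 ']'
  if [ "$var" -eq 1 ]; then echo yes; fi       # prints: [: : integer expression expected, then falls through as false
  if [ "${var:-0}" -eq 1 ]; then echo yes; fi  # default the operand to 0 so the numeric test is always well-formed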
00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:34.806 Cannot find device "nvmf_init_br" 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:34.806 Cannot find device "nvmf_init_br2" 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:18:34.806 Cannot find device "nvmf_tgt_br" 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.806 Cannot find device "nvmf_tgt_br2" 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:34.806 Cannot find device "nvmf_init_br" 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:34.806 Cannot find device "nvmf_init_br2" 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:34.806 Cannot find device "nvmf_tgt_br" 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:34.806 Cannot find device "nvmf_tgt_br2" 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:34.806 Cannot find device "nvmf_br" 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:34.806 Cannot find device "nvmf_init_if" 00:18:34.806 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:35.066 Cannot find device "nvmf_init_if2" 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:35.066 
15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:35.066 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:35.325 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:35.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:35.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:18:35.326 00:18:35.326 --- 10.0.0.3 ping statistics --- 00:18:35.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.326 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:35.326 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:35.326 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.155 ms 00:18:35.326 00:18:35.326 --- 10.0.0.4 ping statistics --- 00:18:35.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.326 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:35.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:18:35.326 00:18:35.326 --- 10.0.0.1 ping statistics --- 00:18:35.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.326 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:35.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:18:35.326 00:18:35.326 --- 10.0.0.2 ping statistics --- 00:18:35.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.326 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=89139 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 89139 00:18:35.326 15:26:21 
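Note: the four pings above verify the veth/bridge topology that nvmf_veth_init assembled. Condensed to one interface pair per side (the trace builds a second nvmf_init_if2/nvmf_tgt_if2 pair with 10.0.0.2 and 10.0.0.4 the same way), the setup amounts to:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator half stays in the root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target half moves into the ns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # bridge the two peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                            # root ns -> target ns, as traced above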
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 89139 ']' 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.326 15:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:35.326 [2024-11-20 15:26:21.713939] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:18:35.326 [2024-11-20 15:26:21.714053] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.586 [2024-11-20 15:26:21.869698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:35.586 [2024-11-20 15:26:21.953390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.586 [2024-11-20 15:26:21.953471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.586 [2024-11-20 15:26:21.953481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.586 [2024-11-20 15:26:21.953490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.586 [2024-11-20 15:26:21.953497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
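Note: nvmfappstart passes -m 0xE, so the target's reactors land on cores 1-3 (bitmask 1110), which is exactly what the three reactor_run notices just below report; the bdevperf host process started later runs with the default 0x1 mask on core 0 (see the core_mask in the results further down). A quick illustrative decode of the mask:

  mask=0xE
  for core in {0..3}; do
      (( mask >> core & 1 )) && echo "reactor on core $core"    # prints cores 1, 2 and 3
  done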
00:18:35.586 [2024-11-20 15:26:21.954912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.586 [2024-11-20 15:26:21.955016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.586 [2024-11-20 15:26:21.955018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.523 15:26:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.523 15:26:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:36.523 15:26:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:36.523 15:26:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:36.523 15:26:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:36.523 15:26:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.523 15:26:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:36.523 [2024-11-20 15:26:22.944405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.523 15:26:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:36.782 Malloc0 00:18:36.782 15:26:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:37.042 15:26:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.302 15:26:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:37.599 [2024-11-20 15:26:23.976467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:37.599 15:26:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:37.858 [2024-11-20 15:26:24.200551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:37.858 15:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:38.118 [2024-11-20 15:26:24.436605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:38.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
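Note: the RPC sequence traced above provisions the failover target end to end; condensed (arguments exactly as traced), it is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, flags as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                                # three portals for the listener shuffle
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done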
00:18:38.118 15:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=89252 00:18:38.118 15:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:38.118 15:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:38.118 15:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 89252 /var/tmp/bdevperf.sock 00:18:38.118 15:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 89252 ']' 00:18:38.118 15:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.118 15:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.118 15:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.118 15:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.118 15:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:39.056 15:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.056 15:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:39.056 15:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:39.315 NVMe0n1 00:18:39.574 15:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:39.834 00:18:39.834 15:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:39.834 15:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=89298 00:18:39.834 15:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:40.772 15:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:41.033 [2024-11-20 15:26:27.346485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346779] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the 
state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.346997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 [2024-11-20 15:26:27.347127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ac0830 is same with the state(6) to be set 00:18:41.033 15:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:44.324 15:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:44.324 00:18:44.324 15:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:44.583 15:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:47.892 15:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:47.892 [2024-11-20 15:26:34.164346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:47.892 15:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:48.831 15:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:49.091 [2024-11-20 15:26:35.420352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 [2024-11-20 15:26:35.420408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 [2024-11-20 15:26:35.420418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 [2024-11-20 15:26:35.420427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 [2024-11-20 15:26:35.420435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 [2024-11-20 15:26:35.420444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 [2024-11-20 15:26:35.420452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 [2024-11-20 15:26:35.420460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 [2024-11-20 15:26:35.420468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 [2024-11-20 15:26:35.420476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 [2024-11-20 15:26:35.420484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 [2024-11-20 15:26:35.420492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 [2024-11-20 15:26:35.420500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set 00:18:49.091 
00:18:49.091 [2024-11-20 15:26:35.420508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b630 is same with the state(6) to be set
[... the identical tcp.c:1773 message repeats for tqpair=0x1c0b630 through 15:26:35.421045; burst elided ...]
00:18:49.092 15:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 89298
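The burst above is a single *ERROR* line repeated with advancing timestamps while the target spins on that qpair's receive state. When triaging these console logs, a one-liner along the following lines condenses such floods (a sketch; console.log stands in for a saved copy of this output):

    # Strip the elapsed-time and wall-clock prefixes, then count how often
    # each distinct message repeats; floods like the tcp.c:1773 burst above
    # collapse to a single high-count row.
    sed -E 's/^[0-9:.]+ //; s/\[[0-9 :.-]+\] //' console.log | sort | uniq -c | sort -rn | head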
00:18:55.695 {
00:18:55.695   "results": [
00:18:55.695     {
00:18:55.695       "job": "NVMe0n1",
00:18:55.695       "core_mask": "0x1",
00:18:55.695       "workload": "verify",
00:18:55.695       "status": "finished",
00:18:55.695       "verify_range": {
00:18:55.695         "start": 0,
00:18:55.695         "length": 16384
00:18:55.695       },
00:18:55.695       "queue_depth": 128,
00:18:55.695       "io_size": 4096,
00:18:55.695       "runtime": 15.006675,
00:18:55.695       "iops": 10352.726370098639,
00:18:55.695       "mibps": 40.44033738319781,
00:18:55.695       "io_failed": 3693,
00:18:55.695       "io_timeout": 0,
00:18:55.695       "avg_latency_us": 12054.675732958302,
00:18:55.695       "min_latency_us": 654.7020080321286,
00:18:55.695       "max_latency_us": 29478.04016064257
00:18:55.695     }
00:18:55.695   ],
00:18:55.695   "core_count": 1
00:18:55.695 }
00:18:55.695 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 89252
00:18:55.695 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 89252 ']'
00:18:55.695 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 89252
00:18:55.695 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:55.695 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:55.695 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89252
00:18:55.695 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:55.695 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:55.695 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89252'
00:18:55.695 killing process with pid 89252
00:18:55.695 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 89252
00:18:55.695 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 89252
00:18:55.695 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:55.695 [2024-11-20 15:26:24.519309] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:18:55.695 [2024-11-20 15:26:24.519467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89252 ]
00:18:55.695 [2024-11-20 15:26:24.664323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:55.695 [2024-11-20 15:26:24.749292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:55.695 Running I/O for 15 seconds...
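Two things in the block above are worth unpacking. First, the results JSON is internally consistent: 10352.726 IOPS at an io_size of 4096 B works out to 10352.726 * 4096 / 2^20 = 40.4403 MiB/s, matching the reported mibps. Second, the dense xtrace between the JSON and the dumped try.txt is autotest_common.sh's killprocess helper shutting bdevperf (pid 89252) down. Reconstructed from the trace alone, it behaves roughly like this sketch (not the verbatim SPDK source; the sudo branch is elided because this run never took it):

    # Sketch of the killprocess helper, reconstructed from the xtrace above.
    killprocess() {
        [ -z "$1" ] && return 1                  # @954: require a pid argument
        kill -0 "$1" || return 0                 # @958: bail if the process is already gone
        if [ "$(uname)" = Linux ]; then          # @959: Linux-specific ps invocation
            process_name=$(ps --no-headers -o comm= "$1")   # @960: here -> reactor_0
        fi
        if [ "$process_name" = sudo ]; then      # @964: a sudo wrapper would need its
            :                                    # child signalled instead (branch elided)
        else
            echo "killing process with pid $1"   # @972
            kill "$1"                            # @973
        fi
        wait "$1"                                # @978: reap it so $? reflects the exit status
    }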
00:18:55.695 11413.00 IOPS, 44.58 MiB/s [2024-11-20T15:26:42.177Z]
00:18:55.695 [2024-11-20 15:26:27.347805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:55.695 [2024-11-20 15:26:27.347870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort pair repeats for qid:0 cid:1, cid:2 and cid:3 ...]
00:18:55.695 [2024-11-20 15:26:27.347965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347f30 is same with the state(6) to be set
00:18:55.695 [2024-11-20 15:26:27.348045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:55.695 [2024-11-20 15:26:27.348060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion abort pairs repeat in 8-block strides: READs for lba 103312 through 103728, WRITEs for lba 103792 through 104320, then READs for lba 103736 through 103776; burst elided ...]
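Each pair in the burst is one queued command being completed with ABORTED - SQ DELETION (status code type 00, status code 08) as the host tears down I/O submission queue sqid:1 for the failover; bursts like this are what line up with the nonzero "io_failed": 3693 in the results block above, though retries at the bdev layer mean the two numbers need not match exactly. A couple of one-liners quantify the bursts from the dumped log (a sketch; the path is the one printed by failover.sh@63 above):

    LOG=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

    # Total aborted completions across the whole 15 s run.
    grep -c 'ABORTED - SQ DELETION' "$LOG"

    # Bucket the aborts by wall-clock second to see one spike per failover.
    grep 'ABORTED - SQ DELETION' "$LOG" | awk '{print $2}' | cut -d. -f1 | sort | uniq -c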
00:18:55.699 [2024-11-20 15:26:27.351665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:55.699 [2024-11-20 15:26:27.351685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:55.699 [2024-11-20 15:26:27.351696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103784 len:8 PRP1 0x0 PRP2 0x0
00:18:55.699 [2024-11-20 15:26:27.351709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:55.699 [2024-11-20 15:26:27.351764] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:18:55.699 [2024-11-20 15:26:27.351782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:55.699 [2024-11-20 15:26:27.354519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:55.699 [2024-11-20 15:26:27.354559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1347f30 (9): Bad file descriptor
00:18:55.699 [2024-11-20 15:26:27.377012] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:18:55.699 10504.50 IOPS, 41.03 MiB/s [2024-11-20T15:26:42.181Z]
00:18:55.699 10292.67 IOPS, 40.21 MiB/s [2024-11-20T15:26:42.181Z]
00:18:55.699 10171.00 IOPS, 39.73 MiB/s [2024-11-20T15:26:42.181Z]
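The bdev_nvme_failover_trid notice shows the host flipping nqn.2016-06.io.spdk:cnode1 from the primary path 10.0.0.3:4420 to the secondary 10.0.0.3:4421 and completing the controller reset roughly 25 ms later (15:26:27.351764 to 15:26:27.377012), after which the per-second IOPS samples resume only slightly below the pre-failover rate. For a path pair like that to exist, both transport IDs must have been registered under one controller name beforehand, along these lines (a sketch of the usual rpc.py calls, not quoted from failover.sh; the controller name and the -x failover multipath mode are assumptions about how this run was configured):

    # Register two TCP paths to the same subsystem under one controller name
    # so bdev_nvme can fail over between them (names are illustrative).
    rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover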
00:18:55.699 [2024-11-20 15:26:30.910741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:55.699 [2024-11-20 15:26:30.910820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion abort pairs repeat in 8-block strides: WRITEs for lba 129136 through 129304, then READs for lba 128368 through 128456; burst elided ...]
00:18:55.700 [2024-11-20 15:26:30.911851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:55.700 [2024-11-20 15:26:30.911865] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.911881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.911895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.911910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.911925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.911943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.911959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.911987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912181] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.700 [2024-11-20 15:26:30.912441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.700 [2024-11-20 15:26:30.912833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.700 [2024-11-20 15:26:30.912850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.912863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.912878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.912891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.912905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.912919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.912933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.912946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.912961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.701 [2024-11-20 15:26:30.912973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.912988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.701 [2024-11-20 15:26:30.913001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.701 [2024-11-20 15:26:30.913029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.701 [2024-11-20 15:26:30.913058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.701 [2024-11-20 15:26:30.913085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 
[2024-11-20 15:26:30.913101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.701 [2024-11-20 15:26:30.913121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.701 [2024-11-20 15:26:30.913151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.701 [2024-11-20 15:26:30.913178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.701 [2024-11-20 15:26:30.913206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913386] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.701 [2024-11-20 15:26:30.913815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.701 [2024-11-20 15:26:30.913832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.913851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.913867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.913880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.913895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.913907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.913923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.913936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.913950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.913963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.913977] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.913990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 
nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129112 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.702 [2024-11-20 15:26:30.914576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b63b0 is same with the state(6) to be set 00:18:55.702 [2024-11-20 15:26:30.914610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.702 [2024-11-20 15:26:30.914621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.702 [2024-11-20 15:26:30.914631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129120 len:8 PRP1 0x0 PRP2 0x0 00:18:55.702 [2024-11-20 15:26:30.914644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914711] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:18:55.702 [2024-11-20 15:26:30.914770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.702 [2024-11-20 15:26:30.914785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.702 [2024-11-20 15:26:30.914817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.702 [2024-11-20 15:26:30.914843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.702 [2024-11-20 15:26:30.914869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.702 [2024-11-20 15:26:30.914882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:55.702 [2024-11-20 15:26:30.917627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:55.702 [2024-11-20 15:26:30.917671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1347f30 (9): Bad file descriptor 00:18:55.702 [2024-11-20 15:26:30.941754] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:18:55.702 10035.00 IOPS, 39.20 MiB/s [2024-11-20T15:26:42.184Z]
00:18:55.702 10001.33 IOPS, 39.07 MiB/s [2024-11-20T15:26:42.184Z]
00:18:55.702 9969.00 IOPS, 38.94 MiB/s [2024-11-20T15:26:42.184Z]
00:18:55.702 9964.50 IOPS, 38.92 MiB/s [2024-11-20T15:26:42.184Z]
00:18:55.702 9943.44 IOPS, 38.84 MiB/s [2024-11-20T15:26:42.185Z]
00:18:55.703 [2024-11-20 15:26:35.421976 .. 15:26:35.424051] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated for the queued I/Os on qid:1 (READ lba:99400-99480, WRITE lba:99544-100008), each completed ABORTED - SQ DELETION (00/08) [repetitive per-command entries elided]
00:18:55.705 [2024-11-20 15:26:35.424065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424336] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.424976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.424988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.425003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.425022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.425037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.705 [2024-11-20 15:26:35.425050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.705 [2024-11-20 15:26:35.425064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.706 [2024-11-20 15:26:35.425077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.706 [2024-11-20 15:26:35.425105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.706 [2024-11-20 15:26:35.425133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.706 [2024-11-20 15:26:35.425160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.706 [2024-11-20 
15:26:35.425187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.706 [2024-11-20 15:26:35.425213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.706 [2024-11-20 15:26:35.425242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.706 [2024-11-20 15:26:35.425269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.706 [2024-11-20 15:26:35.425295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.706 [2024-11-20 15:26:35.425322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.706 [2024-11-20 15:26:35.425367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100384 len:8 PRP1 0x0 PRP2 0x0 00:18:55.706 [2024-11-20 15:26:35.425386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.706 [2024-11-20 15:26:35.425414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.706 [2024-11-20 15:26:35.425424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100392 len:8 PRP1 0x0 PRP2 0x0 00:18:55.706 [2024-11-20 15:26:35.425436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.706 [2024-11-20 15:26:35.425459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.706 [2024-11-20 15:26:35.425471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100400 len:8 PRP1 0x0 PRP2 0x0 00:18:55.706 [2024-11-20 15:26:35.425484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 
[2024-11-20 15:26:35.425498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.706 [2024-11-20 15:26:35.425507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.706 [2024-11-20 15:26:35.425517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100408 len:8 PRP1 0x0 PRP2 0x0 00:18:55.706 [2024-11-20 15:26:35.425529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.706 [2024-11-20 15:26:35.425553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.706 [2024-11-20 15:26:35.425562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100416 len:8 PRP1 0x0 PRP2 0x0 00:18:55.706 [2024-11-20 15:26:35.425575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.706 [2024-11-20 15:26:35.425597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.706 [2024-11-20 15:26:35.425607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99488 len:8 PRP1 0x0 PRP2 0x0 00:18:55.706 [2024-11-20 15:26:35.425619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.706 [2024-11-20 15:26:35.425641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.706 [2024-11-20 15:26:35.425651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99496 len:8 PRP1 0x0 PRP2 0x0 00:18:55.706 [2024-11-20 15:26:35.425663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.706 [2024-11-20 15:26:35.425695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.706 [2024-11-20 15:26:35.425704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99504 len:8 PRP1 0x0 PRP2 0x0 00:18:55.706 [2024-11-20 15:26:35.425717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.706 [2024-11-20 15:26:35.425739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.706 [2024-11-20 15:26:35.425754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99512 len:8 PRP1 0x0 PRP2 0x0 00:18:55.706 [2024-11-20 15:26:35.425768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425780] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.706 [2024-11-20 15:26:35.425789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.706 [2024-11-20 15:26:35.425800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99520 len:8 PRP1 0x0 PRP2 0x0 00:18:55.706 [2024-11-20 15:26:35.425812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.706 [2024-11-20 15:26:35.425835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.706 [2024-11-20 15:26:35.425847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99528 len:8 PRP1 0x0 PRP2 0x0 00:18:55.706 [2024-11-20 15:26:35.425859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.425871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.706 [2024-11-20 15:26:35.425881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.706 [2024-11-20 15:26:35.425890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99536 len:8 PRP1 0x0 PRP2 0x0 00:18:55.706 [2024-11-20 15:26:35.425903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.444115] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:18:55.706 [2024-11-20 15:26:35.444310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.706 [2024-11-20 15:26:35.444344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.444367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.706 [2024-11-20 15:26:35.444384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.444403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.706 [2024-11-20 15:26:35.444421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.444439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.706 [2024-11-20 15:26:35.444456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.706 [2024-11-20 15:26:35.444474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
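The abort storm above is expected during an SQ deletion; the test itself judges success by counting reset completions, not aborts (see the grep at failover.sh@65 below). A hedged one-liner for sizing such a storm from the captured log, assuming the run's output was written to the try.txt file this test creates and later removes:

  # Count I/Os that completed as ABORTED - SQ DELETION in the captured run log
  grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt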
00:18:55.706 [2024-11-20 15:26:35.444563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1347f30 (9): Bad file descriptor
00:18:55.707 [2024-11-20 15:26:35.448659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:18:55.707 [2024-11-20 15:26:35.471147] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:18:55.707 9983.00 IOPS, 39.00 MiB/s
[2024-11-20T15:26:42.189Z] 10086.91 IOPS, 39.40 MiB/s
[2024-11-20T15:26:42.189Z] 10155.42 IOPS, 39.67 MiB/s
[2024-11-20T15:26:42.189Z] 10226.69 IOPS, 39.95 MiB/s
[2024-11-20T15:26:42.189Z] 10297.57 IOPS, 40.22 MiB/s
[2024-11-20T15:26:42.189Z] 10350.73 IOPS, 40.43 MiB/s
00:18:55.707 Latency(us)
00:18:55.707 [2024-11-20T15:26:42.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:55.707 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:55.707 Verification LBA range: start 0x0 length 0x4000
00:18:55.707 NVMe0n1 : 15.01 10352.73 40.44 246.09 0.00 12054.68 654.70 29478.04
00:18:55.707 [2024-11-20T15:26:42.189Z] ===================================================================================================================
00:18:55.707 [2024-11-20T15:26:42.189Z] Total : 10352.73 40.44 246.09 0.00 12054.68 654.70 29478.04
00:18:55.707 Received shutdown signal, test time was about 15.000000 seconds
00:18:55.707
00:18:55.707 Latency(us)
00:18:55.707 [2024-11-20T15:26:42.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:55.707 [2024-11-20T15:26:42.189Z] ===================================================================================================================
00:18:55.707 [2024-11-20T15:26:42.189Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:55.707 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=89503
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 89503 /var/tmp/bdevperf.sock
15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 89503 ']'
15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
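The trace above is the harness pattern for driving bdevperf from a script: start it idle with -z on a private RPC socket, wait for the socket to answer, then issue RPCs against it. A minimal hedged sketch of the same pattern, with paths as in this run; the polling loop is a stand-in for the waitforlisten helper, and rpc_get_methods is used only as a cheap readiness probe:

  # Start bdevperf idle (-z) on its own RPC socket, then wait until it answers RPCs
  sock=/var/tmp/bdevperf.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1   # socket not accepting RPCs yet
  done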
00:18:55.707 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:55.707 15:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:18:56.274 15:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
15:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
15:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
[2024-11-20 15:26:42.724874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:18:56.534 15:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
[2024-11-20 15:26:42.969059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:18:56.534 15:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:57.101 NVMe0n1
00:18:57.101 15:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:57.360
00:18:57.360 15:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:57.618
00:18:57.618 15:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
15:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:18:57.877 15:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:58.136 15:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:19:01.424 15:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
15:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:19:01.424 15:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=89641
15:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
15:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 89641
00:19:02.798 {
00:19:02.798 "results": [
00:19:02.798 {
00:19:02.798 "job": "NVMe0n1",
00:19:02.798 "core_mask": "0x1",
00:19:02.798 "workload": "verify",
00:19:02.798 "status": "finished",
00:19:02.798 "verify_range": {
00:19:02.798 "start": 0,
00:19:02.798 "length": 16384
00:19:02.798 },
00:19:02.798 "queue_depth": 128,
00:19:02.798 "io_size": 4096,
00:19:02.798 "runtime": 1.010305,
00:19:02.798 "iops": 10277.094540757495,
00:19:02.798 "mibps": 40.14490054983396,
00:19:02.798 "io_failed": 0,
00:19:02.798 "io_timeout": 0,
00:19:02.798 "avg_latency_us": 12405.49367699054,
00:19:02.798 "min_latency_us": 1677.879518072289,
00:19:02.798 "max_latency_us": 13686.232931726907
00:19:02.798 }
00:19:02.799 ],
00:19:02.799 "core_count": 1
00:19:02.799 }
15:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
[2024-11-20 15:26:41.517404] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
[2024-11-20 15:26:41.517538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89503 ]
[2024-11-20 15:26:41.670903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 15:26:41.755762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-20 15:26:44.533493] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
[2024-11-20 15:26:44.533634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 15:26:44.533654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 15:26:44.533671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 15:26:44.533696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 15:26:44.533710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 15:26:44.533724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 15:26:44.533737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 15:26:44.533750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 15:26:44.533765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
[2024-11-20 15:26:44.533813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
[2024-11-20 15:26:44.533839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x647f30 (9): Bad file descriptor
[2024-11-20 15:26:44.537980] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
Running I/O for 1 seconds...
00:19:02.799 10254.00 IOPS, 40.05 MiB/s
00:19:02.799 Latency(us)
00:19:02.799 [2024-11-20T15:26:49.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:02.799 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:02.799 Verification LBA range: start 0x0 length 0x4000
00:19:02.799 NVMe0n1 : 1.01 10277.09 40.14 0.00 0.00 12405.49 1677.88 13686.23
00:19:02.799 [2024-11-20T15:26:49.281Z] ===================================================================================================================
00:19:02.799 [2024-11-20T15:26:49.281Z] Total : 10277.09 40.14 0.00 0.00 12405.49 1677.88 13686.23
15:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
15:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
15:26:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
15:26:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
15:26:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
15:26:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
15:26:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
15:26:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
15:26:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 89503
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 89503 ']'
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 89503
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89503
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 89503
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89503'
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 89503
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 89503
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 89139 ']'
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 89139
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 89139 ']'
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 89139
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89139
killing process with pid 89139
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89139'
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 89139
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 89139
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
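nvmftestfini's cleanup, traced above, boils down to a few idempotent steps; the veth/netns teardown continues below. A hedged sketch of the core sequence in isolation, with module and chain names as they appear in this log (the real helper wraps the modprobe in a retry loop of up to 20 attempts):

  # Unload the kernel NVMe/TCP initiator stack, then drop SPDK's iptables rules
  modprobe -v -r nvme-tcp       # verbose removal; the rmmod lines above are its output
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore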
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
15:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0
00:19:07.891
00:19:07.891 real 0m33.469s
00:19:07.891 user 2m6.597s
00:19:07.891 sys 0m6.404s
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
15:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:19:07.891 ************************************
00:19:07.891 END TEST nvmf_failover
00:19:07.891 ************************************
00:19:07.891 15:26:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
15:26:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
15:26:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
15:26:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:08.034 ************************************
00:19:08.034 START TEST nvmf_host_discovery
00:19:08.034 ************************************
00:19:08.034 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:19:08.151 * Looking for test storage...
00:19:08.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:08.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.151 --rc genhtml_branch_coverage=1 00:19:08.151 --rc genhtml_function_coverage=1 00:19:08.151 --rc genhtml_legend=1 00:19:08.151 --rc geninfo_all_blocks=1 00:19:08.151 --rc geninfo_unexecuted_blocks=1 00:19:08.151 00:19:08.151 ' 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:08.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.151 --rc genhtml_branch_coverage=1 00:19:08.151 --rc genhtml_function_coverage=1 00:19:08.151 --rc genhtml_legend=1 00:19:08.151 --rc geninfo_all_blocks=1 00:19:08.151 --rc geninfo_unexecuted_blocks=1 00:19:08.151 00:19:08.151 ' 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:08.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.151 --rc genhtml_branch_coverage=1 00:19:08.151 --rc genhtml_function_coverage=1 00:19:08.151 --rc genhtml_legend=1 00:19:08.151 --rc geninfo_all_blocks=1 00:19:08.151 --rc geninfo_unexecuted_blocks=1 00:19:08.151 00:19:08.151 ' 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:08.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.151 --rc genhtml_branch_coverage=1 00:19:08.151 --rc genhtml_function_coverage=1 00:19:08.151 --rc genhtml_legend=1 00:19:08.151 --rc geninfo_all_blocks=1 00:19:08.151 --rc geninfo_unexecuted_blocks=1 00:19:08.151 00:19:08.151 ' 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.151 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:19:08.152 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:19:08.152 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.152 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.152 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:08.152 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.152 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:08.152 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:08.152 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.152 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.152 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.152 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.152 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...same toolchain directories repeated by nested sourcing, as in the export.sh@2 line above...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...same directories, rotated once more...]:/var/lib/snapd/snap/bin 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...same PATH as just exported...]:/var/lib/snapd/snap/bin 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
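The variables just set describe a self-contained virtual test network that nvmf_veth_init assembles in the trace below. Condensed into plain iproute2 commands (interface, bridge, namespace names, and addresses taken from the log; needs root; the second init/tgt veth pair is omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # one L2 segment for all peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

The bridge puts all veth peers on one segment, which is why the pings to 10.0.0.3 and 10.0.0.4 further down succeed, and the iptables rules added once the links are up open TCP/4420 tagged with -m comment so teardown can later delete exactly the rules this test inserted. The "Cannot find device" and "Cannot open network namespace" messages that follow are expected: the script first tries to tear down leftovers from a previous run, and each failed cleanup is swallowed with true.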
00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:08.412 Cannot find device "nvmf_init_br" 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:08.412 Cannot find device "nvmf_init_br2" 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:08.412 Cannot find device "nvmf_tgt_br" 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:08.412 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:08.413 Cannot find device "nvmf_tgt_br2" 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:08.413 Cannot find device "nvmf_init_br" 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:08.413 Cannot find device "nvmf_init_br2" 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:08.413 Cannot find device "nvmf_tgt_br" 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:08.413 Cannot find device "nvmf_tgt_br2" 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:08.413 Cannot find device "nvmf_br" 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:08.413 Cannot find device "nvmf_init_if" 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:08.413 Cannot find device "nvmf_init_if2" 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:08.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:08.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:08.413 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:08.672 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:08.672 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:08.672 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:08.672 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:08.672 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:08.672 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:08.672 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:08.673 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:08.673 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:08.673 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:08.673 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:08.673 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:08.673 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:08.932 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:08.932 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.143 ms 00:19:08.932 00:19:08.932 --- 10.0.0.3 ping statistics --- 00:19:08.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.932 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:08.932 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:08.932 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.109 ms 00:19:08.932 00:19:08.932 --- 10.0.0.4 ping statistics --- 00:19:08.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.932 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:08.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:08.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:19:08.932 00:19:08.932 --- 10.0.0.1 ping statistics --- 00:19:08.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.932 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:08.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:08.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:19:08.932 00:19:08.932 --- 10.0.0.2 ping statistics --- 00:19:08.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.932 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=89999 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 89999 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 89999 ']' 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.932 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.933 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.933 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.933 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.933 [2024-11-20 15:26:55.304009] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
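nvmfappstart has just launched the target inside the namespace and is now polling for its RPC socket. Roughly, minus SPDK's bookkeeping (binary path and flags from the log; the until-loop is a simplified stand-in for waitforlisten, with rpc_get_methods used only as a cheap probe):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done

Once the socket answers, the trace resumes with the DPDK EAL banner and the reactor starting on core 1 (-m 0x2 pins the target there; the host app started later uses -m 0x1, core 0).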
00:19:08.933 [2024-11-20 15:26:55.304080] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.192 [2024-11-20 15:26:55.439572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.192 [2024-11-20 15:26:55.487235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.192 [2024-11-20 15:26:55.487287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.192 [2024-11-20 15:26:55.487297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.192 [2024-11-20 15:26:55.487305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.192 [2024-11-20 15:26:55.487312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.192 [2024-11-20 15:26:55.487602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.760 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.760 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:09.760 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:09.760 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.760 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.019 [2024-11-20 15:26:56.279151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.019 [2024-11-20 15:26:56.291271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.019 null0 00:19:10.019 15:26:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.019 null1 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=90049 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 90049 /tmp/host.sock 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 90049 ']' 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:10.019 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.019 15:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.019 [2024-11-20 15:26:56.390094] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
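At this point two SPDK applications are up: the target inside nvmf_tgt_ns_spdk on the default RPC socket, and a second nvmf_tgt acting as the host/initiator on /tmp/host.sock. The rpc_cmd wrappers sprinkled before and after this point drive both apps over JSON-RPC; condensed and reordered for readability (every method, address, and NQN below appears verbatim in the log, but the test actually interleaves these calls with the polling checks shown in the trace):

    # target side (default socket /var/tmp/spdk.sock), run from the spdk checkout
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

    # host side: point the discovery service at the target and watch results appear
    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # -> nvme0n1 ...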
00:19:10.019 [2024-11-20 15:26:56.390161] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90049 ] 00:19:10.278 [2024-11-20 15:26:56.542974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.278 [2024-11-20 15:26:56.592409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.215 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:11.216 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.475 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:11.475 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:11.475 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.475 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.475 [2024-11-20 15:26:57.705907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:11.475 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.475 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:11.475 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:11.475 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:11.475 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:11.475 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:11.475 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:11.476 15:26:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:19:11.476 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:19:12.043 [2024-11-20 15:26:58.368705] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:12.043 [2024-11-20 15:26:58.368751] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:12.043 [2024-11-20 15:26:58.368767] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:12.043 
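The sleep 1 just above is one iteration of waitforcondition, the retry helper the trace replays throughout: evaluate a condition string, and if it does not hold yet, sleep and try again, up to ten times. Reconstructed from the xtrace (local cond, max=10, (( max-- )), eval, sleep 1 all appear above; the failure message is added here, since this run never reaches it):

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0    # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
            sleep 1
        done
        echo "condition never became true: $cond" >&2
        return 1
    }

While the host side sleeps, the discovery poller does its work: the INFO lines around here show the discovery controller attaching, the log page reporting nqn.2016-06.io.spdk:cnode0 at 10.0.0.3:4420 as a new subsystem, and a qpair being created, after which the nvme0 check finally passes.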
[2024-11-20 15:26:58.456743] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:12.043 [2024-11-20 15:26:58.518099] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:12.043 [2024-11-20 15:26:58.518927] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xfd5ba0:1 started. 00:19:12.043 [2024-11-20 15:26:58.520619] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:12.043 [2024-11-20 15:26:58.520644] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:12.303 [2024-11-20 15:26:58.527078] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xfd5ba0 was disconnected and freed. delete nvme_qpair. 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.563 15:26:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:12.563 15:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:12.563 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.824 [2024-11-20 15:26:59.118390] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf503a0:1 started. 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:12.824 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:12.825 [2024-11-20 15:26:59.126103] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf503a0 was disconnected and freed. delete nvme_qpair. 
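The notification bookkeeping above is how the test proves each hot-added resource was announced: every bdev event gets a monotonically increasing notification id, and get_notification_count asks only for events newer than the last notify_id it recorded. In plain rpc.py terms (socket path from the log; the sample record shape is an illustration of SPDK's notify format, not output copied from this run):

    # count events newer than id 0, exactly as the trace does
    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'
    # each entry looks roughly like:
    #   {"type": "bdev_register", "ctx": "nvme0n1", "id": 1}

After null1 is attached as a second namespace above, exactly one new event is expected, so notification_count comes back 1 and notify_id advances to 2 a few lines below.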
00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.825 [2024-11-20 15:26:59.227967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:12.825 [2024-11-20 15:26:59.228543] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:12.825 [2024-11-20 15:26:59.228572] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:12.825 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.155 [2024-11-20 15:26:59.314442] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:13.155 15:26:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.155 [2024-11-20 15:26:59.372799] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:19:13.155 [2024-11-20 15:26:59.372850] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:13.155 [2024-11-20 15:26:59.372860] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:13.155 [2024-11-20 15:26:59.372867] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:19:13.155 15:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.091 [2024-11-20 15:27:00.494803] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:14.091 [2024-11-20 15:27:00.494850] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:14.091 [2024-11-20 15:27:00.495107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.091 [2024-11-20 15:27:00.495137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.091 [2024-11-20 15:27:00.495148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.091 [2024-11-20 15:27:00.495156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.091 [2024-11-20 15:27:00.495166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.091 [2024-11-20 15:27:00.495175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.091 [2024-11-20 15:27:00.495185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.091 [2024-11-20 15:27:00.495194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.091 [2024-11-20 15:27:00.495203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa8280 is same with the state(6) to be set 00:19:14.091 15:27:00 
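[editor's note] The get_notification_count bookkeeping traced at host/discovery.sh@74-@75 (above and earlier) fetches notifications past the last-seen ID and advances a cursor: notify_id moves 0 -> 1 -> 2 and later to 4 as counts of 1, 1, 0, 2 come back. A rough sketch consistent with that behavior, with the socket path and flags taken verbatim from the trace:

```bash
# Hedged reconstruction of host/discovery.sh@74-@75; the cursor-advance
# arithmetic is inferred from the logged notify_id values, not from source.
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock \
        notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))   # advance past seen events
}
```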
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:14.091 [2024-11-20 15:27:00.505041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa8280 (9): Bad file descriptor 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.091 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:14.091 [2024-11-20 15:27:00.515044] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:14.091 [2024-11-20 15:27:00.515065] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:14.091 [2024-11-20 15:27:00.515075] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:14.091 [2024-11-20 15:27:00.515081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:14.091 [2024-11-20 15:27:00.515111] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:14.091 [2024-11-20 15:27:00.515195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.092 [2024-11-20 15:27:00.515213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa8280 with addr=10.0.0.3, port=4420 00:19:14.092 [2024-11-20 15:27:00.515224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa8280 is same with the state(6) to be set 00:19:14.092 [2024-11-20 15:27:00.515240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa8280 (9): Bad file descriptor 00:19:14.092 [2024-11-20 15:27:00.515253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:14.092 [2024-11-20 15:27:00.515262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:14.092 [2024-11-20 15:27:00.515274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:19:14.092 [2024-11-20 15:27:00.515282] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:14.092 [2024-11-20 15:27:00.515289] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:14.092 [2024-11-20 15:27:00.515294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:14.092 [2024-11-20 15:27:00.525104] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:14.092 [2024-11-20 15:27:00.525124] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:14.092 [2024-11-20 15:27:00.525130] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:14.092 [2024-11-20 15:27:00.525135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:14.092 [2024-11-20 15:27:00.525158] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:14.092 [2024-11-20 15:27:00.525203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.092 [2024-11-20 15:27:00.525217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa8280 with addr=10.0.0.3, port=4420 00:19:14.092 [2024-11-20 15:27:00.525227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa8280 is same with the state(6) to be set 00:19:14.092 [2024-11-20 15:27:00.525240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa8280 (9): Bad file descriptor 00:19:14.092 [2024-11-20 15:27:00.525252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:14.092 [2024-11-20 15:27:00.525261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:14.092 [2024-11-20 15:27:00.525271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:14.092 [2024-11-20 15:27:00.525279] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:14.092 [2024-11-20 15:27:00.525284] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:14.092 [2024-11-20 15:27:00.525290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.092 [2024-11-20 15:27:00.535150] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:14.092 [2024-11-20 15:27:00.535164] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:14.092 [2024-11-20 15:27:00.535169] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:19:14.092 [2024-11-20 15:27:00.535174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:14.092 [2024-11-20 15:27:00.535197] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:14.092 [2024-11-20 15:27:00.535239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.092 [2024-11-20 15:27:00.535254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa8280 with addr=10.0.0.3, port=4420 00:19:14.092 [2024-11-20 15:27:00.535264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa8280 is same with the state(6) to be set 00:19:14.092 [2024-11-20 15:27:00.535276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa8280 (9): Bad file descriptor 00:19:14.092 [2024-11-20 15:27:00.535288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:14.092 [2024-11-20 15:27:00.535297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:14.092 [2024-11-20 15:27:00.535306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:14.092 [2024-11-20 15:27:00.535314] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:14.092 [2024-11-20 15:27:00.535319] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:14.092 [2024-11-20 15:27:00.535324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:14.092 [2024-11-20 15:27:00.545190] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:14.092 [2024-11-20 15:27:00.545210] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:14.092 [2024-11-20 15:27:00.545216] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:14.092 [2024-11-20 15:27:00.545222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:14.092 [2024-11-20 15:27:00.545245] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:19:14.092 [2024-11-20 15:27:00.545287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.092 [2024-11-20 15:27:00.545302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa8280 with addr=10.0.0.3, port=4420 00:19:14.092 [2024-11-20 15:27:00.545312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa8280 is same with the state(6) to be set 00:19:14.092 [2024-11-20 15:27:00.545326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa8280 (9): Bad file descriptor 00:19:14.092 [2024-11-20 15:27:00.545338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:14.092 [2024-11-20 15:27:00.545347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:14.092 [2024-11-20 15:27:00.545357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:14.092 [2024-11-20 15:27:00.545365] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:14.092 [2024-11-20 15:27:00.545371] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:14.092 [2024-11-20 15:27:00.545376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:14.092 [2024-11-20 15:27:00.555236] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:14.092 [2024-11-20 15:27:00.555254] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:14.092 [2024-11-20 15:27:00.555260] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:14.092 [2024-11-20 15:27:00.555266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:14.092 [2024-11-20 15:27:00.555288] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:14.092 [2024-11-20 15:27:00.555326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.092 [2024-11-20 15:27:00.555340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa8280 with addr=10.0.0.3, port=4420 00:19:14.092 [2024-11-20 15:27:00.555351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa8280 is same with the state(6) to be set 00:19:14.092 [2024-11-20 15:27:00.555364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa8280 (9): Bad file descriptor 00:19:14.092 [2024-11-20 15:27:00.555376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:14.092 [2024-11-20 15:27:00.555385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:14.092 [2024-11-20 15:27:00.555394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:14.092 [2024-11-20 15:27:00.555401] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:19:14.092 [2024-11-20 15:27:00.555407] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:14.092 [2024-11-20 15:27:00.555412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.092 [2024-11-20 15:27:00.565280] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:14.092 [2024-11-20 15:27:00.565293] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:14.092 [2024-11-20 15:27:00.565299] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:14.092 [2024-11-20 15:27:00.565304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:14.092 [2024-11-20 15:27:00.565324] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
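[editor's note] The three getters exercised throughout this test are thin rpc_cmd + jq pipelines whose shape is fully visible in the xtrace (host/discovery.sh@55, @59, @63). Reconstructed roughly:

```bash
# Hedged reconstructions from the @55/@59/@63 traces; sockets, RPC names,
# and jq filters are exactly as logged, but the real definitions may differ.
get_bdev_list() {           # names of all bdevs, space-joined and sorted
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}
get_subsystem_names() {     # names of attached NVMe controllers
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {     # service ports of all paths for one controller
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
```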
00:19:14.092 [2024-11-20 15:27:00.565361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.092 [2024-11-20 15:27:00.565374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa8280 with addr=10.0.0.3, port=4420 00:19:14.092 [2024-11-20 15:27:00.565384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa8280 is same with the state(6) to be set 00:19:14.092 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:14.093 [2024-11-20 15:27:00.565397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa8280 (9): Bad file descriptor 00:19:14.093 [2024-11-20 15:27:00.565410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:14.093 [2024-11-20 15:27:00.565419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:14.093 [2024-11-20 15:27:00.565429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:14.093 [2024-11-20 15:27:00.565436] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:14.093 [2024-11-20 15:27:00.565441] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:14.093 [2024-11-20 15:27:00.565446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:14.093 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:14.352 [2024-11-20 15:27:00.575316] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:14.352 [2024-11-20 15:27:00.575336] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:14.352 [2024-11-20 15:27:00.575342] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:14.352 [2024-11-20 15:27:00.575348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:14.352 [2024-11-20 15:27:00.575370] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:19:14.352 [2024-11-20 15:27:00.575413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.352 [2024-11-20 15:27:00.575428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa8280 with addr=10.0.0.3, port=4420 00:19:14.352 [2024-11-20 15:27:00.575438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa8280 is same with the state(6) to be set 00:19:14.352 [2024-11-20 15:27:00.575452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa8280 (9): Bad file descriptor 00:19:14.352 [2024-11-20 15:27:00.575464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:14.352 [2024-11-20 15:27:00.575473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:14.352 [2024-11-20 15:27:00.575482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:14.352 [2024-11-20 15:27:00.575490] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:14.352 [2024-11-20 15:27:00.575495] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:14.352 [2024-11-20 15:27:00.575500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:14.352 [2024-11-20 15:27:00.581788] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:19:14.352 [2024-11-20 15:27:00.581816] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:14.352 15:27:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.352 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:14.353 15:27:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:14.353 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.612 15:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.547 [2024-11-20 15:27:01.913439] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:15.547 [2024-11-20 15:27:01.913493] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:15.547 [2024-11-20 15:27:01.913512] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:15.547 [2024-11-20 15:27:01.999397] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:19:15.807 [2024-11-20 15:27:02.057871] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:19:15.807 [2024-11-20 15:27:02.058735] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xfbeb80:1 started. 00:19:15.807 [2024-11-20 15:27:02.061123] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:15.807 [2024-11-20 15:27:02.061175] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:15.807 [2024-11-20 15:27:02.062415] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xfbeb80 was disconnected and freed. delete nvme_qpair. 
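[editor's note] The step that follows restarts discovery under the name already in use and expects JSON-RPC error -17 (File exists). Since rpc_cmd wraps SPDK's scripts/rpc.py, the equivalent direct invocation would look roughly like this; the rpc.py path and flag spelling are assumptions based on the logged parameter map:

```bash
# Assumed stand-alone equivalent of the traced rpc_cmd call; flags mirror
# the logged params (name:nvme, trtype:tcp, trsvcid:8009, wait_for_attach).
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w
# Expected outcome here: Code=-17 Msg=File exists, because a discovery
# service named "nvme" is already running against 10.0.0.3:8009.
```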
00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.807 2024/11/20 15:27:02 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:19:15.807 request: 00:19:15.807 { 00:19:15.807 "method": "bdev_nvme_start_discovery", 00:19:15.807 "params": { 00:19:15.807 "name": "nvme", 00:19:15.807 "trtype": "tcp", 00:19:15.807 "traddr": "10.0.0.3", 00:19:15.807 "adrfam": "ipv4", 00:19:15.807 "trsvcid": "8009", 00:19:15.807 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:15.807 "wait_for_attach": true 00:19:15.807 } 00:19:15.807 } 00:19:15.807 Got JSON-RPC error response 00:19:15.807 GoRPCClient: error on JSON-RPC call 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:15.807 
15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:15.807 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.808 2024/11/20 15:27:02 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:19:15.808 request: 00:19:15.808 { 00:19:15.808 "method": "bdev_nvme_start_discovery", 00:19:15.808 "params": { 00:19:15.808 "name": "nvme_second", 00:19:15.808 "trtype": "tcp", 00:19:15.808 "traddr": "10.0.0.3", 00:19:15.808 "adrfam": "ipv4", 00:19:15.808 "trsvcid": "8009", 
00:19:15.808 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:15.808 "wait_for_attach": true 00:19:15.808 } 00:19:15.808 } 00:19:15.808 Got JSON-RPC error response 00:19:15.808 GoRPCClient: error on JSON-RPC call 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:15.808 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:16.067 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.067 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:16.067 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:16.067 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:16.067 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:16.067 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:16.067 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.067 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:16.067 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.067 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:16.067 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.067 15:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.122 [2024-11-20 15:27:03.343479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.122 [2024-11-20 15:27:03.343568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa85b0 with addr=10.0.0.3, port=8010 00:19:17.122 [2024-11-20 15:27:03.343597] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:17.122 [2024-11-20 15:27:03.343608] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:17.122 [2024-11-20 15:27:03.343619] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:18.058 [2024-11-20 15:27:04.341883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.058 [2024-11-20 15:27:04.341970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbef70 with addr=10.0.0.3, port=8010 00:19:18.058 [2024-11-20 15:27:04.341999] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:18.058 [2024-11-20 15:27:04.342010] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:18.058 [2024-11-20 15:27:04.342021] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:18.994 [2024-11-20 15:27:05.340056] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:19:18.994 2024/11/20 15:27:05 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:19:18.994 request: 00:19:18.994 { 00:19:18.994 "method": "bdev_nvme_start_discovery", 00:19:18.994 "params": { 00:19:18.994 "name": "nvme_second", 00:19:18.994 "trtype": "tcp", 00:19:18.994 "traddr": "10.0.0.3", 00:19:18.994 "adrfam": "ipv4", 00:19:18.994 "trsvcid": "8010", 00:19:18.994 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:18.994 "wait_for_attach": false, 00:19:18.994 "attach_timeout_ms": 3000 00:19:18.994 } 00:19:18.994 } 00:19:18.994 Got JSON-RPC error response 00:19:18.994 GoRPCClient: error on JSON-RPC call 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
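The two NOT-wrapped rpc_cmd calls above exercise both failure paths of bdev_nvme_start_discovery: repeating discovery for 10.0.0.3:8009, an endpoint already covered by the running discovery service, fails with Code=-17 (File exists), while port 8010, where nothing listens, has its connect() attempts refused (errno = 111) until the 3000 ms attach timeout expires, giving Code=-110 (Connection timed out). Assuming rpc_cmd forwards its arguments to scripts/rpc.py, as it does throughout this test, the equivalent direct invocations would be:

    # duplicate discovery of a live endpoint -> Code=-17 (File exists)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # nothing listens on 8010; retries run until the -T 3000 ms timeout -> Code=-110
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000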
00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 90049 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:18.994 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:19.253 rmmod nvme_tcp 00:19:19.253 rmmod nvme_fabrics 00:19:19.253 rmmod nvme_keyring 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 89999 ']' 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 89999 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 89999 ']' 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 89999 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89999 00:19:19.253 killing process with pid 89999 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:19.253 
15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89999' 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 89999 00:19:19.253 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 89999 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:19.512 15:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:19.771 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:19.771 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.771 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.771 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.771 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:19:19.771 00:19:19.771 real 0m11.709s 00:19:19.771 
user 0m21.416s 00:19:19.771 sys 0m2.611s 00:19:19.771 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.771 ************************************ 00:19:19.771 END TEST nvmf_host_discovery 00:19:19.771 ************************************ 00:19:19.771 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.771 15:27:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:19.771 15:27:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:19.771 15:27:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.771 15:27:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.771 ************************************ 00:19:19.771 START TEST nvmf_host_multipath_status 00:19:19.771 ************************************ 00:19:19.771 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:20.031 * Looking for test storage... 00:19:20.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:20.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.031 --rc genhtml_branch_coverage=1 00:19:20.031 --rc genhtml_function_coverage=1 00:19:20.031 --rc genhtml_legend=1 00:19:20.031 --rc geninfo_all_blocks=1 00:19:20.031 --rc geninfo_unexecuted_blocks=1 00:19:20.031 00:19:20.031 ' 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:20.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.031 --rc genhtml_branch_coverage=1 00:19:20.031 --rc genhtml_function_coverage=1 00:19:20.031 --rc genhtml_legend=1 00:19:20.031 --rc geninfo_all_blocks=1 00:19:20.031 --rc geninfo_unexecuted_blocks=1 00:19:20.031 00:19:20.031 ' 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:20.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.031 --rc genhtml_branch_coverage=1 00:19:20.031 --rc genhtml_function_coverage=1 00:19:20.031 --rc genhtml_legend=1 00:19:20.031 --rc geninfo_all_blocks=1 00:19:20.031 --rc geninfo_unexecuted_blocks=1 00:19:20.031 00:19:20.031 ' 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:20.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.031 --rc genhtml_branch_coverage=1 00:19:20.031 --rc genhtml_function_coverage=1 00:19:20.031 --rc genhtml_legend=1 00:19:20.031 --rc geninfo_all_blocks=1 00:19:20.031 --rc geninfo_unexecuted_blocks=1 00:19:20.031 00:19:20.031 ' 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:20.031 15:27:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.031 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.032 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:20.032 Cannot find device "nvmf_init_br" 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:20.032 Cannot find device "nvmf_init_br2" 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:20.032 Cannot find device "nvmf_tgt_br" 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:20.032 Cannot find device "nvmf_tgt_br2" 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:19:20.032 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:20.293 Cannot find device "nvmf_init_br" 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:20.293 Cannot find device "nvmf_init_br2" 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:20.293 Cannot find device "nvmf_tgt_br" 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:20.293 Cannot find device "nvmf_tgt_br2" 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:20.293 Cannot find device "nvmf_br" 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:19:20.293 Cannot find device "nvmf_init_if" 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:20.293 Cannot find device "nvmf_init_if2" 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:20.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:20.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:20.293 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:20.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:20.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.165 ms 00:19:20.553 00:19:20.553 --- 10.0.0.3 ping statistics --- 00:19:20.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.553 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:20.553 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:20.553 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 00:19:20.553 00:19:20.553 --- 10.0.0.4 ping statistics --- 00:19:20.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.553 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:20.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:20.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:19:20.553 00:19:20.553 --- 10.0.0.1 ping statistics --- 00:19:20.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.553 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:20.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:19:20.553 00:19:20.553 --- 10.0.0.2 ping statistics --- 00:19:20.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.553 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.553 15:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:20.553 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=90597 00:19:20.553 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:20.553 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 90597 00:19:20.553 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 90597 ']' 00:19:20.553 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.553 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.553 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
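The four pings above confirm the veth fabric that the nvmf_veth_init lines just built, and nvmf_tgt is now being launched inside the nvmf_tgt_ns_spdk namespace. Condensed from the ip commands traced above (root required; the full helper also creates the second initiator/target pair, nvmf_init_if2/nvmf_tgt_if2, and brings every link up), the topology is essentially:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge joins both sides
    ip link set nvmf_tgt_br master nvmf_br

Note also the ipts wrapper: each ACCEPT rule it installs carries an SPDK_NVMF: comment, which is what lets the teardown seen at the end of the previous test restore the ruleset with iptables-save | grep -v SPDK_NVMF | iptables-restore.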
00:19:20.553 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.553 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:20.812 [2024-11-20 15:27:07.060749] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:19:20.812 [2024-11-20 15:27:07.060831] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.812 [2024-11-20 15:27:07.212314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:20.812 [2024-11-20 15:27:07.266848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.812 [2024-11-20 15:27:07.266902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.812 [2024-11-20 15:27:07.266912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.812 [2024-11-20 15:27:07.266920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.812 [2024-11-20 15:27:07.266927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.812 [2024-11-20 15:27:07.267914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.812 [2024-11-20 15:27:07.267914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.748 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.748 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:21.748 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:21.748 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:21.748 15:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:21.748 15:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.748 15:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90597 00:19:21.748 15:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:22.007 [2024-11-20 15:27:08.272855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.007 15:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:22.265 Malloc0 00:19:22.265 15:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:22.524 15:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:22.783 15:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:23.041 [2024-11-20 15:27:09.285024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:23.041 15:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:23.298 [2024-11-20 15:27:09.532986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:23.298 15:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90700 00:19:23.298 15:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:23.298 15:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.298 15:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90700 /var/tmp/bdevperf.sock 00:19:23.298 15:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 90700 ']' 00:19:23.298 15:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.298 15:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.298 15:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
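With bdevperf coming up on /var/tmp/bdevperf.sock, the target side is now fully provisioned. Stripped of the xtrace framing, the RPC sequence issued over the last few lines is the following (the flag glosses in the comments follow rpc.py's usual option names and are annotations, not output from this run):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    # -a: allow any host; -r: enable ANA reporting, which the state flips below rely on
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners on one address give the host two distinct paths to the same namespace
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

bdevperf then attaches the same controller name Nvme0 once per listener with -x multipath, which is why the 4420 and 4421 io_paths show up side by side in the checks that follow.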
00:19:23.298 15:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.298 15:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:24.236 15:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.236 15:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:24.236 15:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:24.496 15:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:24.815 Nvme0n1 00:19:24.815 15:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:25.093 Nvme0n1 00:19:25.093 15:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:25.093 15:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:27.033 15:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:27.033 15:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:27.293 15:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:27.552 15:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:28.489 15:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:28.489 15:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:28.489 15:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.489 15:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:28.774 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.774 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:28.774 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.774 15:27:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:29.034 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:29.034 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:29.034 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.294 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:29.294 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.294 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:29.294 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.294 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:29.553 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.553 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:29.553 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.553 15:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:29.813 15:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.813 15:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:29.813 15:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.813 15:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:30.073 15:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:30.073 15:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:30.073 15:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:30.354 15:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:30.613 15:27:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:31.552 15:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:31.552 15:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:31.552 15:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:31.552 15:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.811 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:31.811 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:31.811 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.811 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:32.070 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.070 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:32.070 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.070 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:32.328 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.328 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:32.328 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.328 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:32.587 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.587 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:32.587 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.587 15:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:32.852 15:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.852 15:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:32.852 15:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.852 15:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:33.110 15:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.110 15:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:33.110 15:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:33.370 15:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:33.630 15:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:34.569 15:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:34.569 15:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:34.569 15:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.569 15:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:34.828 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.828 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:34.828 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:34.828 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.086 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:35.086 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:35.086 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:35.086 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.345 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.345 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:19:35.345 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.345 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:35.605 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.605 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:35.605 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.605 15:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:35.864 15:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.864 15:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:35.864 15:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:35.864 15:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.123 15:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.123 15:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:36.123 15:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:36.123 15:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:36.383 15:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:37.766 15:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:37.766 15:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:37.766 15:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.766 15:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:37.766 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.766 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:37.766 15:27:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:37.766 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.025 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:38.025 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:38.025 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.025 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:38.283 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.283 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:38.283 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.283 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:38.541 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.541 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:38.541 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.541 15:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:38.800 15:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.800 15:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:38.800 15:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.800 15:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:39.059 15:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:39.059 15:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:39.059 15:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:39.059 15:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:39.318 15:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:40.697 15:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:40.697 15:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:40.697 15:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.697 15:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:40.697 15:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:40.697 15:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:40.697 15:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.697 15:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:40.956 15:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:40.956 15:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:40.956 15:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:40.956 15:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:41.215 15:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:41.215 15:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:41.215 15:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:41.215 15:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:41.474 15:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:41.474 15:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:41.474 15:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:41.474 15:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:19:41.733 15:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:41.733 15:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:41.733 15:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:41.733 15:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:41.992 15:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:41.992 15:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:41.992 15:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:41.992 15:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:42.252 15:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:43.190 15:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:43.190 15:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:43.190 15:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.190 15:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:43.450 15:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:43.450 15:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:43.450 15:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.450 15:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:43.710 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.710 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:43.710 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:43.710 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
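Every check in this section is the same three-step probe, visible as the repeating @64 xtrace records: query the host's I/O paths over the bdevperf app's RPC socket, filter by listener port with jq, and string-compare one field. A minimal sketch of that helper, reconstructed from the trace (the real definition lives in host/multipath_status.sh; the variable names are guesses, and rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

port_status() {
    local trsvcid=$1 attr=$2 expected=$3
    # Ask the bdevperf app (selected via -s and its RPC socket) for all
    # io_paths, keep the path whose listener port matches, read one field.
    local actual
    actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$trsvcid\").$attr")
    # The [[ false == \f\a\l\s\e ]] records in the trace are this comparison,
    # with the right-hand side escaped by bash xtrace.
    [[ $actual == "$expected" ]]
}

So port_status 4421 current true, as logged above, asserts that the 4421 path is the one currently carrying I/O.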
00:19:43.970 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.970 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:43.970 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.970 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:44.230 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:44.230 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:44.230 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:44.230 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:44.507 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:44.507 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:44.507 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:44.507 15:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:44.810 15:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:44.810 15:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:45.069 15:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:45.069 15:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:45.328 15:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:45.588 15:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:46.526 15:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:46.526 15:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:46.526 15:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
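The checks running here are the first ones after bdev_nvme_set_multipath_policy switched Nvme0n1 from the default active_passive policy to active_active. Under active_passive only one path reports current=true at a time; under active_active every connected, accessible path in the best available ANA group does, which is why check_status now expects current=true on both 4420 and 4421 when both listeners are optimized (and, a few cycles below, when both are non_optimized). The set_ANA_state helper seen at @59/@60 is just the pair of listener RPCs issued back to back. Condensed from the trace (rpc.py again abbreviates the full scripts/rpc.py path):

# Host side: change the multipath policy inside the bdevperf app.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
# Target side: advertise both listeners as ANA optimized.
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized

Note the asymmetry: the policy RPC targets the host-side bdevperf process through its -s /var/tmp/bdevperf.sock socket, while the ANA updates go to the target's default RPC socket; the sleep 1 between set_ANA_state and check_status gives the host time to observe the ANA change.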
00:19:46.526 15:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:46.785 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.785 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:46.785 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:46.786 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:47.044 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.044 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:47.044 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:47.044 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:47.304 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.304 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:47.304 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:47.304 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:47.304 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.304 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:47.304 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:47.563 15:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:47.563 15:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.563 15:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:47.563 15:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:47.563 15:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:47.823 15:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.823 
15:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:47.823 15:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:48.082 15:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:48.341 15:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:49.280 15:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:49.280 15:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:49.280 15:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.280 15:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:49.539 15:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:49.539 15:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:49.539 15:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:49.539 15:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.798 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.798 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:49.798 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:49.798 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:50.318 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.318 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:50.318 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:50.318 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.577 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.577 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:50.577 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.577 15:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:50.836 15:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.836 15:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:50.836 15:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:50.836 15:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:51.096 15:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:52.033 15:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:52.033 15:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:52.292 15:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.292 15:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:52.292 15:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.292 15:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:52.292 15:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:52.292 15:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.551 15:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.551 15:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:19:52.551 15:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:52.551 15:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.934 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.934 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:52.934 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.934 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:53.193 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.193 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:53.193 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.193 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:53.452 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.452 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:53.452 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.452 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:53.452 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.452 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:53.452 15:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:53.711 15:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:53.970 15:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:54.907 15:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:54.907 15:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:54.907 15:27:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.907 15:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:55.165 15:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.165 15:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:55.165 15:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.165 15:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:55.424 15:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:55.424 15:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:55.424 15:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.424 15:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:55.682 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.682 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:55.682 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.682 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:55.942 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.942 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:55.942 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.942 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:56.200 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.200 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:56.200 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:56.200 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:56.459 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:19:56.459 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90700
00:19:56.459 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 90700 ']'
00:19:56.459 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 90700
00:19:56.459 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:19:56.459 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:56.459 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90700
00:19:56.459 killing process with pid 90700 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:56.459 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:56.459 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90700'
00:19:56.459 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 90700
00:19:56.459 15:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 90700
00:19:56.459 {
00:19:56.459   "results": [
00:19:56.459     {
00:19:56.459       "job": "Nvme0n1",
00:19:56.459       "core_mask": "0x4",
00:19:56.459       "workload": "verify",
00:19:56.459       "status": "terminated",
00:19:56.459       "verify_range": {
00:19:56.459         "start": 0,
00:19:56.459         "length": 16384
00:19:56.459       },
00:19:56.459       "queue_depth": 128,
00:19:56.459       "io_size": 4096,
00:19:56.459       "runtime": 31.317655,
00:19:56.459       "iops": 10244.381324208342,
00:19:56.459       "mibps": 40.017114547688834,
00:19:56.459       "io_failed": 0,
00:19:56.459       "io_timeout": 0,
00:19:56.459       "avg_latency_us": 12471.741612642008,
00:19:56.459       "min_latency_us": 137.3558232931727,
00:19:56.459       "max_latency_us": 4042702.6506024096
00:19:56.459     }
00:19:56.459   ],
00:19:56.459   "core_count": 1
00:19:56.459 }
00:19:56.723 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90700
00:19:56.723 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:56.723 [2024-11-20 15:27:09.601749] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:19:56.723 [2024-11-20 15:27:09.601875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90700 ]
00:19:56.723 [2024-11-20 15:27:09.755975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:56.723 [2024-11-20 15:27:09.809893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:56.723 Running I/O for 90 seconds...
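The JSON block above is the bdevperf summary the harness cares about: the verify job on Nvme0n1 ran for about 31.3 seconds across every ANA transition in this test and finished with io_failed: 0 and io_timeout: 0, i.e. no I/O was lost while the paths flipped between optimized, non_optimized and inaccessible. Everything that follows, from the per-interval IOPS samples to the qpair trace, is the content of try.txt being dumped by the cat step. A small jq sketch for pulling the health indicators out of such a summary by hand, assuming it has been saved to a file (results.json is a hypothetical name; the field names are verbatim from the output above):

# Per job: IOPS, runtime, failed/timed-out I/Os and average latency.
jq -r '.results[] |
  "\(.job): \(.iops) IOPS over \(.runtime)s, \(.io_failed) failed, \(.io_timeout) timed out, avg \(.avg_latency_us) us"' results.json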
00:19:56.723 10716.00 IOPS, 41.86 MiB/s [2024-11-20T15:27:43.205Z] 10781.00 IOPS, 42.11 MiB/s [2024-11-20T15:27:43.205Z] 10756.00 IOPS, 42.02 MiB/s [2024-11-20T15:27:43.205Z] 10762.50 IOPS, 42.04 MiB/s [2024-11-20T15:27:43.205Z] 10771.00 IOPS, 42.07 MiB/s [2024-11-20T15:27:43.205Z] 10645.33 IOPS, 41.58 MiB/s [2024-11-20T15:27:43.205Z] 10557.57 IOPS, 41.24 MiB/s [2024-11-20T15:27:43.205Z] 10498.75 IOPS, 41.01 MiB/s [2024-11-20T15:27:43.205Z] 10506.89 IOPS, 41.04 MiB/s [2024-11-20T15:27:43.205Z] 10538.60 IOPS, 41.17 MiB/s [2024-11-20T15:27:43.205Z] 10554.64 IOPS, 41.23 MiB/s [2024-11-20T15:27:43.205Z] 10585.58 IOPS, 41.35 MiB/s [2024-11-20T15:27:43.205Z] 10609.00 IOPS, 41.44 MiB/s [2024-11-20T15:27:43.205Z] [2024-11-20 15:27:25.497163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 
sqhd:0053 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.723 [2024-11-20 15:27:25.497842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.723 [2024-11-20 15:27:25.497875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.723 [2024-11-20 15:27:25.497906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.723 [2024-11-20 15:27:25.497937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.723 [2024-11-20 15:27:25.497969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.497987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.723 [2024-11-20 15:27:25.498007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.498026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.723 [2024-11-20 15:27:25.498039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.498057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.723 [2024-11-20 15:27:25.498070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.498090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.723 [2024-11-20 15:27:25.498103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.498121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.723 [2024-11-20 15:27:25.498134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.498153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.723 [2024-11-20 15:27:25.498168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:56.723 [2024-11-20 15:27:25.498187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.723 [2024-11-20 15:27:25.498201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:19:56.723 [2024-11-20 15:27:25.498220-25.502282] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repetitive per-command NOTICE/completion pairs condensed] every further queued READ (sqid:1 nsid:1 lba:4648-5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (sqid:1 nsid:1 lba:5272-5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:19:56.726 10573.29 IOPS, 41.30 MiB/s
[2024-11-20T15:27:43.208Z] 9868.40 IOPS, 38.55 MiB/s
[2024-11-20T15:27:43.208Z] 9251.62 IOPS, 36.14 MiB/s
[2024-11-20T15:27:43.208Z] 8707.41 IOPS, 34.01 MiB/s
[2024-11-20T15:27:43.208Z] 8265.83 IOPS, 32.29 MiB/s
[2024-11-20T15:27:43.208Z] 8374.26 IOPS, 32.71 MiB/s
[2024-11-20T15:27:43.208Z] 8507.95 IOPS, 33.23 MiB/s
[2024-11-20T15:27:43.208Z] 8801.43 IOPS, 34.38 MiB/s
[2024-11-20T15:27:43.208Z] 9086.23 IOPS, 35.49 MiB/s
[2024-11-20T15:27:43.208Z] 9341.78 IOPS, 36.49 MiB/s
[2024-11-20T15:27:43.208Z] 9423.62 IOPS, 36.81 MiB/s
[2024-11-20T15:27:43.208Z] 9502.20 IOPS, 37.12 MiB/s
[2024-11-20T15:27:43.208Z] 9571.04 IOPS, 37.39 MiB/s
[2024-11-20T15:27:43.208Z] 9780.15 IOPS, 38.20 MiB/s
[2024-11-20T15:27:43.208Z] 9968.11 IOPS, 38.94 MiB/s
[2024-11-20T15:27:43.208Z] 00:19:56.726-00:19:56.728 [2024-11-20 15:27:40.332409-40.335822] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repetitive per-command NOTICE/completion pairs condensed] every queued READ (sqid:1 nsid:1 lba:95384-95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (sqid:1 nsid:1 lba:95952-96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
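The INACCESSIBLE completions condensed above are the point of this test: multipath_status.sh flips a path's ANA state while bdevperf keeps I/O in flight, and the IOPS dip and recovery below track the flip. A minimal sketch of how such a flip is driven (illustrative, not a command captured in this log; the NQN matches the teardown further down, but the target address, port, and exact rpc.py flag spellings are assumptions):

    # Sketch: force one listener into the ANA state seen above, then restore it.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1   # subsystem deleted in the teardown below
    # While inaccessible, queued I/O drains with ASYMMETRIC ACCESS INACCESSIBLE (03/02).
    $rpc_py nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    sleep 10   # let bdevperf observe the failed path (placeholder duration)
    $rpc_py nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n optimized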
00:19:56.728 10107.03 IOPS, 39.48 MiB/s
[2024-11-20T15:27:43.210Z] 10165.37 IOPS, 39.71 MiB/s
[2024-11-20T15:27:43.210Z] 10226.16 IOPS, 39.95 MiB/s
[2024-11-20T15:27:43.210Z] Received shutdown signal, test time was about 31.318335 seconds
00:19:56.728
00:19:56.728 Latency(us)
00:19:56.728 [2024-11-20T15:27:43.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:56.728 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:56.728 Verification LBA range: start 0x0 length 0x4000
00:19:56.728 Nvme0n1 : 31.32 10244.38 40.02 0.00 0.00 12471.74 137.36 4042702.65
00:19:56.728 [2024-11-20T15:27:43.210Z] ===================================================================================================================
00:19:56.728 [2024-11-20T15:27:43.210Z] Total : 10244.38 40.02 0.00 0.00 12471.74 137.36 4042702.65
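The MiB/s column in the summary above is just IOPS times the 4096-byte IO size; a quick check (illustrative one-liner, not part of the captured log):

    # 10244.38 IOPS x 4096 B per IO, converted to MiB/s -- prints 40.02, matching Nvme0n1's row.
    awk 'BEGIN { printf "%.2f\n", 10244.38 * 4096 / 1048576 }'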
00:19:56.728 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:56.987 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:19:56.987 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:56.987 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:19:56.987 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:56.987 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:19:56.987 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:56.987 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:19:56.987 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:56.987 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:56.987 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:19:57.246 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:57.246 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:19:57.246 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:19:57.246 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 90597 ']'
00:19:57.246 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 90597
00:19:57.246 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 90597 ']'
00:19:57.246 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 90597
00:19:57.246 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:19:57.246 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:57.246 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90597
00:19:57.247 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:57.247 killing process with pid 90597
00:19:57.247 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:57.247 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90597'
00:19:57.247 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 90597
00:19:57.247 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 90597
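The @954-@978 trace above steps through the autotest killprocess helper one line at a time; reassembled as a standalone sketch (simplified reconstruction, not the verbatim autotest_common.sh source):

    # Sketch of the killprocess flow traced above.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                           # @954: no pid given
        kill -0 "$pid" || return 0                          # @958: already gone?
        if [ "$(uname)" = Linux ]; then                     # @959
            process_name=$(ps --no-headers -o comm= "$pid") # @960: here "reactor_0"
        fi
        [ "$process_name" = sudo ] && return 1  # @964: sudo-wrapped targets are handled differently in the real helper
        echo "killing process with pid $pid"                # @972: matches the log line above
        kill "$pid"                                         # @973
        wait "$pid"                                         # @978: reap and propagate exit status
    }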
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:19:57.506 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:19:57.765 15:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:19:57.765 15:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:19:57.765 15:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:19:57.765 15:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:19:57.765 15:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:57.765 15:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:57.765 15:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:57.765 15:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
00:19:57.765
00:19:57.765 real 0m38.003s
00:19:57.765 user 1m57.084s
00:19:57.765 sys 0m13.180s
00:19:57.765 ************************************
00:19:57.765 END TEST nvmf_host_multipath_status
00:19:57.765 ************************************
00:19:57.765 15:27:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:19:57.765 15:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:57.765 15:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:57.765 15:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:57.765 ************************************
00:19:57.765 START TEST nvmf_discovery_remove_ifc
00:19:57.765 ************************************
00:19:57.765 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:19:58.025 * Looking for test storage...
00:19:58.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:58.025 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:58.025 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:19:58.025 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:58.025 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:58.025 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:58.025 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:58.025 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:58.025 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.025 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:19:58.025 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:19:58.025 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:19:58.025 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:58.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.026 --rc genhtml_branch_coverage=1 00:19:58.026 --rc genhtml_function_coverage=1 00:19:58.026 --rc genhtml_legend=1 00:19:58.026 --rc geninfo_all_blocks=1 00:19:58.026 --rc geninfo_unexecuted_blocks=1 00:19:58.026 00:19:58.026 ' 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:58.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.026 --rc genhtml_branch_coverage=1 00:19:58.026 --rc genhtml_function_coverage=1 00:19:58.026 --rc genhtml_legend=1 00:19:58.026 --rc geninfo_all_blocks=1 00:19:58.026 --rc geninfo_unexecuted_blocks=1 00:19:58.026 00:19:58.026 ' 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:58.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.026 --rc genhtml_branch_coverage=1 00:19:58.026 --rc genhtml_function_coverage=1 00:19:58.026 --rc genhtml_legend=1 00:19:58.026 --rc geninfo_all_blocks=1 00:19:58.026 --rc geninfo_unexecuted_blocks=1 00:19:58.026 00:19:58.026 ' 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:58.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.026 --rc genhtml_branch_coverage=1 00:19:58.026 --rc genhtml_function_coverage=1 00:19:58.026 --rc genhtml_legend=1 00:19:58.026 --rc geninfo_all_blocks=1 00:19:58.026 --rc geninfo_unexecuted_blocks=1 00:19:58.026 00:19:58.026 ' 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:58.026 15:27:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.026 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:58.027 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:58.027 15:27:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:58.027 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:58.286 Cannot find device "nvmf_init_br" 00:19:58.286 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:58.286 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:58.286 Cannot find device "nvmf_init_br2" 00:19:58.286 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:58.286 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:58.286 Cannot find device "nvmf_tgt_br" 00:19:58.286 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:19:58.286 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:58.287 Cannot find device "nvmf_tgt_br2" 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:58.287 Cannot find device "nvmf_init_br" 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:58.287 Cannot find device "nvmf_init_br2" 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:58.287 Cannot find device "nvmf_tgt_br" 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:58.287 Cannot find device "nvmf_tgt_br2" 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:58.287 Cannot find device "nvmf_br" 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:58.287 Cannot find device "nvmf_init_if" 00:19:58.287 15:27:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:58.287 Cannot find device "nvmf_init_if2" 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:58.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:58.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:58.287 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:58.547 15:27:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:58.547 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:58.547 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 00:19:58.547 00:19:58.547 --- 10.0.0.3 ping statistics --- 00:19:58.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.547 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:58.547 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:58.547 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:19:58.547 00:19:58.547 --- 10.0.0.4 ping statistics --- 00:19:58.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.547 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:58.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:19:58.547 00:19:58.547 --- 10.0.0.1 ping statistics --- 00:19:58.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.547 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:58.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:19:58.547 00:19:58.547 --- 10.0.0.2 ping statistics --- 00:19:58.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.547 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:58.547 15:27:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=92046 00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 92046 00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 92046 ']' 00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
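[Annotation] The nvmf/common.sh records above (@177-@224) show nvmf_veth_init assembling the virtual topology this test runs on, then launching nvmf_tgt inside the namespace. A condensed sketch of the equivalent manual setup — interface and address names taken verbatim from the records; the real helper also creates the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4) and tags its iptables rules with SPDK_NVMF comments:
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk         # target side lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if               # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                # bridge the two halves together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target-namespace reachability check, as in the log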
00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.547 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:58.807 [2024-11-20 15:27:45.072688] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:19:58.807 [2024-11-20 15:27:45.072765] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.807 [2024-11-20 15:27:45.214179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.807 [2024-11-20 15:27:45.282804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.807 [2024-11-20 15:27:45.282869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.807 [2024-11-20 15:27:45.282880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.807 [2024-11-20 15:27:45.282890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.807 [2024-11-20 15:27:45.282898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.807 [2024-11-20 15:27:45.283296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.746 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.746 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:59.746 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.746 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:59.746 15:27:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.746 [2024-11-20 15:27:46.033822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.746 [2024-11-20 15:27:46.042040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:59.746 null0 00:19:59.746 [2024-11-20 15:27:46.073855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=92096 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 92096 /tmp/host.sock 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 92096 ']' 00:19:59.746 15:27:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:59.746 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.746 15:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:59.746 [2024-11-20 15:27:46.154821] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:19:59.746 [2024-11-20 15:27:46.154895] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92096 ] 00:20:00.006 [2024-11-20 15:27:46.295609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.006 [2024-11-20 15:27:46.365058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:00.947 15:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.947 15:27:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:01.885 [2024-11-20 15:27:48.233438] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:01.885 [2024-11-20 15:27:48.233477] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:01.885 [2024-11-20 15:27:48.233493] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:01.885 [2024-11-20 15:27:48.319388] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:02.145 [2024-11-20 15:27:48.373731] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:02.145 [2024-11-20 15:27:48.374533] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21921c0:1 started. 00:20:02.145 [2024-11-20 15:27:48.376469] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:02.145 [2024-11-20 15:27:48.376537] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:02.145 [2024-11-20 15:27:48.376561] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:02.145 [2024-11-20 15:27:48.376578] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:02.145 [2024-11-20 15:27:48.376604] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:02.145 [2024-11-20 15:27:48.381724] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21921c0 was disconnected and freed. delete nvme_qpair. 
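[Annotation] At this point the host app on /tmp/host.sock has attached the discovered subsystem as nvme0 (qpair 0x21921c0). The bring-up seen in the @65/@66/@69 records, expressed directly against scripts/rpc.py — rpc_cmd in the harness is a thin wrapper around it, and all flags below are copied verbatim from the trace:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
The short loss/reconnect timeouts are what let the test observe the bdev disappearing quickly once the target interface is pulled below.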
00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:02.145 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:02.146 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:02.146 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.146 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:02.146 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:02.146 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.146 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:02.146 15:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:03.084 15:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:03.084 15:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:03.084 15:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:03.084 15:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.084 15:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:03.084 15:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:03.084 15:27:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:03.084 15:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.084 15:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:03.084 15:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:04.463 15:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:04.463 15:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:04.463 15:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:04.463 15:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.463 15:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:04.463 15:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:04.463 15:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:04.463 15:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.463 15:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:04.463 15:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:05.402 15:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:05.402 15:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:05.402 15:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.402 15:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:05.402 15:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:05.402 15:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:05.402 15:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:05.402 15:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.402 15:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:05.402 15:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:06.338 15:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:06.338 15:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.338 15:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.338 15:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:06.338 15:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:06.338 15:27:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:06.338 15:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:06.338 15:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.338 15:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:06.338 15:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:07.275 15:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:07.275 15:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:07.275 15:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:07.275 15:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.275 15:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:07.275 15:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:07.275 15:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:07.275 15:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.535 15:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:07.535 15:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:07.535 [2024-11-20 15:27:53.806914] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:07.535 [2024-11-20 15:27:53.806987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.535 [2024-11-20 15:27:53.807002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.535 [2024-11-20 15:27:53.807014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.535 [2024-11-20 15:27:53.807024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.535 [2024-11-20 15:27:53.807033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.535 [2024-11-20 15:27:53.807042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.535 [2024-11-20 15:27:53.807052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.535 [2024-11-20 15:27:53.807061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.535 [2024-11-20 15:27:53.807071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.535 [2024-11-20 
15:27:53.807080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.535 [2024-11-20 15:27:53.807089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f520 is same with the state(6) to be set 00:20:07.535 [2024-11-20 15:27:53.816893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216f520 (9): Bad file descriptor 00:20:07.535 [2024-11-20 15:27:53.826897] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:07.535 [2024-11-20 15:27:53.826916] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:07.535 [2024-11-20 15:27:53.826925] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:07.535 [2024-11-20 15:27:53.826931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:07.535 [2024-11-20 15:27:53.826970] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:08.474 15:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:08.474 15:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:08.474 15:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:08.474 15:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.474 15:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:08.474 15:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:08.474 15:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:08.474 [2024-11-20 15:27:54.890798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:08.474 [2024-11-20 15:27:54.890937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x216f520 with addr=10.0.0.3, port=4420 00:20:08.474 [2024-11-20 15:27:54.890984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f520 is same with the state(6) to be set 00:20:08.474 [2024-11-20 15:27:54.891078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216f520 (9): Bad file descriptor 00:20:08.474 [2024-11-20 15:27:54.892188] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:20:08.474 [2024-11-20 15:27:54.892275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:08.474 [2024-11-20 15:27:54.892307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:08.474 [2024-11-20 15:27:54.892340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:08.474 [2024-11-20 15:27:54.892368] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:20:08.474 [2024-11-20 15:27:54.892389] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:08.474 [2024-11-20 15:27:54.892407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:08.474 [2024-11-20 15:27:54.892439] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:08.474 [2024-11-20 15:27:54.892457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:08.474 15:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.474 15:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:08.474 15:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:09.411 [2024-11-20 15:27:55.890938] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:09.411 [2024-11-20 15:27:55.890985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:09.411 [2024-11-20 15:27:55.891013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:09.411 [2024-11-20 15:27:55.891023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:09.411 [2024-11-20 15:27:55.891036] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:20:09.411 [2024-11-20 15:27:55.891047] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:09.411 [2024-11-20 15:27:55.891054] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:09.411 [2024-11-20 15:27:55.891060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
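[Annotation] The repeated bdev_get_bdevs | jq | sort | xargs records interleaved above are the harness polling the bdev list until it reaches an expected value. Reconstructed from the @29/@33/@34 xtrace — the real helpers live in discovery_remove_ifc.sh and may differ in detail — the loop is roughly:
    get_bdev_list() {
        # list current bdev names as a single sorted, space-joined string
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # poll once per second until the list matches the expected value
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # @72: bdev present after the initial attach
    wait_for_bdev ''        # @79: bdev gone once the target interface is removed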
00:20:09.411 [2024-11-20 15:27:55.891099] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:09.411 [2024-11-20 15:27:55.891153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.411 [2024-11-20 15:27:55.891166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.411 [2024-11-20 15:27:55.891181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.411 [2024-11-20 15:27:55.891190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.411 [2024-11-20 15:27:55.891200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.411 [2024-11-20 15:27:55.891209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.411 [2024-11-20 15:27:55.891219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.411 [2024-11-20 15:27:55.891227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.411 [2024-11-20 15:27:55.891237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.411 [2024-11-20 15:27:55.891246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.411 [2024-11-20 15:27:55.891255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:20:09.671 [2024-11-20 15:27:55.891718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fc2d0 (9): Bad file descriptor 00:20:09.671 [2024-11-20 15:27:55.892729] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:09.671 [2024-11-20 15:27:55.892747] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:20:09.671 15:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:09.672 15:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:09.672 15:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:09.672 15:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:09.672 15:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.672 15:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:09.672 15:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:09.672 15:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.672 15:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:09.672 15:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:09.672 15:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:09.672 15:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:09.672 15:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:09.672 15:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:09.672 15:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:09.672 15:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:09.672 15:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.672 15:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:09.672 15:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:09.672 15:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.672 15:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:09.672 15:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:10.610 15:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:10.610 15:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:10.610 15:27:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:10.610 15:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.610 15:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:10.610 15:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:10.610 15:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:10.610 15:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.870 15:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:10.870 15:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:11.437 [2024-11-20 15:27:57.899100] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:11.437 [2024-11-20 15:27:57.899134] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:11.437 [2024-11-20 15:27:57.899151] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:11.695 [2024-11-20 15:27:57.985049] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:11.695 [2024-11-20 15:27:58.039345] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:20:11.695 [2024-11-20 15:27:58.039938] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2169010:1 started. 00:20:11.695 [2024-11-20 15:27:58.041163] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:11.695 [2024-11-20 15:27:58.041210] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:11.695 [2024-11-20 15:27:58.041231] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:11.695 [2024-11-20 15:27:58.041247] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:11.695 [2024-11-20 15:27:58.041256] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:11.695 [2024-11-20 15:27:58.047495] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2169010 was disconnected and freed. delete nvme_qpair. 
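[editor's note] The get_bdev_list/wait_for_bdev xtrace repeating through this stretch corresponds to helpers along the following lines. This is a reconstruction from the trace, not a verbatim quote of test/nvmf/host/discovery_remove_ifc.sh, and `rpc_cmd` is the test suite's wrapper around scripts/rpc.py:
get_bdev_list() {
    # List the host app's bdevs as one sorted, space-separated string.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local bdev_name=$1
    # Poll once per second until the expected bdev shows up in the list.
    while [[ "$(get_bdev_list)" != "$bdev_name" ]]; do
        sleep 1
    done
}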
00:20:11.695 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:11.695 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.695 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:11.695 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:11.695 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.695 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:11.695 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:11.696 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 92096 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 92096 ']' 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 92096 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92096 00:20:11.954 killing process with pid 92096 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92096' 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 92096 00:20:11.954 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 92096 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:12.213 rmmod nvme_tcp 00:20:12.213 rmmod nvme_fabrics 00:20:12.213 rmmod nvme_keyring 00:20:12.213 15:27:58 
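[editor's note] The killprocess xtrace above (autotest_common.sh lines @954-@978) reduces to roughly the sketch below, inferred from the traced commands rather than copied from the helper; in particular the sudo branch is simplified — the real helper handles a sudo-wrapped process differently than a plain refusal:
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1            # @954: refuse an empty pid
    kill -0 "$pid" || return 0           # @958: process already gone, nothing to do
    if [[ $(uname) == Linux ]]; then     # @959
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # @960
        [[ $process_name == sudo ]] && return 1           # simplification of @964
    fi
    echo "killing process with pid $pid" # @972
    kill "$pid"                          # @973
    wait "$pid" || true                  # @978: reap it, ignoring the exit code
}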
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 92046 ']' 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 92046 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 92046 ']' 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 92046 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92046 00:20:12.213 killing process with pid 92046 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:12.213 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:12.214 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92046' 00:20:12.214 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 92046 00:20:12.214 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 92046 00:20:12.473 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:12.473 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:12.473 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:12.473 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:12.473 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:20:12.473 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:20:12.473 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:12.473 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:12.473 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:12.473 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:12.473 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:12.733 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:12.733 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:12.733 15:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:12.733 ************************************ 00:20:12.733 END TEST nvmf_discovery_remove_ifc 00:20:12.733 ************************************ 00:20:12.733 00:20:12.733 real 0m14.994s 00:20:12.733 user 0m25.079s 00:20:12.733 sys 0m2.849s 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.733 15:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.992 ************************************ 00:20:12.992 START TEST nvmf_identify_kernel_target 00:20:12.992 ************************************ 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:12.992 * Looking for test storage... 
00:20:12.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.992 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:13.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.253 --rc genhtml_branch_coverage=1 00:20:13.253 --rc genhtml_function_coverage=1 00:20:13.253 --rc genhtml_legend=1 00:20:13.253 --rc geninfo_all_blocks=1 00:20:13.253 --rc geninfo_unexecuted_blocks=1 00:20:13.253 00:20:13.253 ' 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:13.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.253 --rc genhtml_branch_coverage=1 00:20:13.253 --rc genhtml_function_coverage=1 00:20:13.253 --rc genhtml_legend=1 00:20:13.253 --rc geninfo_all_blocks=1 00:20:13.253 --rc geninfo_unexecuted_blocks=1 00:20:13.253 00:20:13.253 ' 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:13.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.253 --rc genhtml_branch_coverage=1 00:20:13.253 --rc genhtml_function_coverage=1 00:20:13.253 --rc genhtml_legend=1 00:20:13.253 --rc geninfo_all_blocks=1 00:20:13.253 --rc geninfo_unexecuted_blocks=1 00:20:13.253 00:20:13.253 ' 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:13.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.253 --rc genhtml_branch_coverage=1 00:20:13.253 --rc genhtml_function_coverage=1 00:20:13.253 --rc genhtml_legend=1 00:20:13.253 --rc geninfo_all_blocks=1 00:20:13.253 --rc geninfo_unexecuted_blocks=1 00:20:13.253 00:20:13.253 ' 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
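[editor's note] The lcov probe above walks `lt 1.15 2` through scripts/common.sh's cmp_versions. Reconstructed from the xtrace — only the '<' branch exercised here is shown, and the real helper also validates each component via its `decimal` check and handles the other operators:
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:                       # split on dots, dashes, colons (@336/@337)
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v
    # Walk the longer of the two component lists (@364), padding with zeros.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # @367: ver1 is newer
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # @368: '<' holds, e.g. 1 < 2
    done
    return 1                            # equal versions are not strictly less
}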
00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.253 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:13.254 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:13.254 15:27:59 
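[editor's note] The "[: : integer expression expected" complaint above is benign: nvmf/common.sh line 33 evaluates `[ '' -eq 1 ]` with an unset variable, `[` prints the error and exits 2, and the surrounding test simply behaves as false, so the script falls through to the next branch. A quiet equivalent would default the value first (SOME_FLAG is a hypothetical stand-in; the trace does not show which variable line 33 actually tests):
if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # default empty/unset to 0 before -eq
    echo "flag set"
fi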
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:13.254 15:27:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:13.254 Cannot find device "nvmf_init_br" 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:13.254 Cannot find device "nvmf_init_br2" 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:13.254 Cannot find device "nvmf_tgt_br" 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:13.254 Cannot find device "nvmf_tgt_br2" 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:13.254 Cannot find device "nvmf_init_br" 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:13.254 Cannot find device "nvmf_init_br2" 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:13.254 Cannot find device "nvmf_tgt_br" 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:13.254 Cannot find device "nvmf_tgt_br2" 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:13.254 Cannot find device "nvmf_br" 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:13.254 Cannot find device "nvmf_init_if" 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:13.254 Cannot find device "nvmf_init_if2" 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:13.254 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:13.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.514 15:27:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:13.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:13.514 15:27:59 
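[editor's note] Condensed, the nvmf_veth_init sequence traced above (completed just below with the bridge enslaving, firewall rules, and ping checks) builds a two-initiator/two-target topology with the target ends isolated in a network namespace. A sketch assembled from the traced commands:
ip netns add nvmf_tgt_ns_spdk
# Four veth pairs: the *_if ends carry traffic, the *_br ends get bridged.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator side
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br              # one L2 segment for all peers
done
# Accept rules are tagged with an SPDK_NVMF comment so teardown can drop them
# wholesale via: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'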
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:13.514 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:13.515 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:13.515 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.515 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:13.515 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:13.515 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:13.515 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:20:13.515 00:20:13.515 --- 10.0.0.3 ping statistics --- 00:20:13.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.515 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:20:13.515 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:13.774 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:13.774 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:20:13.774 00:20:13.774 --- 10.0.0.4 ping statistics --- 00:20:13.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.775 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:13.775 15:27:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:13.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:20:13.775 00:20:13.775 --- 10.0.0.1 ping statistics --- 00:20:13.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.775 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:13.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:13.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:20:13.775 00:20:13.775 --- 10.0.0.2 ping statistics --- 00:20:13.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.775 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:13.775 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:14.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:14.343 Waiting for block devices as requested 00:20:14.343 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:14.343 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:14.601 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:14.601 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:14.601 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:14.601 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:14.601 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:14.601 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:14.601 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:14.601 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:14.601 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:14.601 No valid GPT data, bailing 00:20:14.602 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:14.602 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:14.602 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:14.602 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:14.602 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:14.602 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:14.602 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:14.602 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:14.602 15:28:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:14.602 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:14.602 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:14.602 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:14.602 15:28:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:14.602 No valid GPT data, bailing 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:14.602 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:14.861 No valid GPT data, bailing 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:14.861 No valid GPT data, bailing 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -a 10.0.0.1 -t tcp -s 4420 00:20:14.861 00:20:14.861 Discovery Log Number of Records 2, Generation counter 2 00:20:14.861 =====Discovery Log Entry 0====== 00:20:14.861 trtype: tcp 00:20:14.861 adrfam: ipv4 00:20:14.861 subtype: current discovery subsystem 00:20:14.861 treq: not specified, sq flow control disable supported 00:20:14.861 portid: 1 00:20:14.861 trsvcid: 4420 00:20:14.861 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:14.861 traddr: 10.0.0.1 00:20:14.861 eflags: none 00:20:14.861 sectype: none 00:20:14.861 =====Discovery Log Entry 1====== 00:20:14.861 trtype: tcp 00:20:14.861 adrfam: ipv4 00:20:14.861 subtype: nvme subsystem 00:20:14.861 treq: not 
specified, sq flow control disable supported 00:20:14.861 portid: 1 00:20:14.861 trsvcid: 4420 00:20:14.861 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:14.861 traddr: 10.0.0.1 00:20:14.861 eflags: none 00:20:14.861 sectype: none 00:20:14.861 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:14.861 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:15.121 ===================================================== 00:20:15.121 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:15.121 ===================================================== 00:20:15.121 Controller Capabilities/Features 00:20:15.121 ================================ 00:20:15.121 Vendor ID: 0000 00:20:15.121 Subsystem Vendor ID: 0000 00:20:15.121 Serial Number: 0cf8deb84ff407fde061 00:20:15.121 Model Number: Linux 00:20:15.121 Firmware Version: 6.8.9-20 00:20:15.121 Recommended Arb Burst: 0 00:20:15.121 IEEE OUI Identifier: 00 00 00 00:20:15.121 Multi-path I/O 00:20:15.121 May have multiple subsystem ports: No 00:20:15.121 May have multiple controllers: No 00:20:15.121 Associated with SR-IOV VF: No 00:20:15.121 Max Data Transfer Size: Unlimited 00:20:15.121 Max Number of Namespaces: 0 00:20:15.121 Max Number of I/O Queues: 1024 00:20:15.121 NVMe Specification Version (VS): 1.3 00:20:15.121 NVMe Specification Version (Identify): 1.3 00:20:15.121 Maximum Queue Entries: 1024 00:20:15.121 Contiguous Queues Required: No 00:20:15.121 Arbitration Mechanisms Supported 00:20:15.121 Weighted Round Robin: Not Supported 00:20:15.121 Vendor Specific: Not Supported 00:20:15.121 Reset Timeout: 7500 ms 00:20:15.121 Doorbell Stride: 4 bytes 00:20:15.121 NVM Subsystem Reset: Not Supported 00:20:15.121 Command Sets Supported 00:20:15.121 NVM Command Set: Supported 00:20:15.121 Boot Partition: Not Supported 00:20:15.121 Memory Page Size Minimum: 4096 bytes 00:20:15.121 Memory Page Size Maximum: 4096 bytes 00:20:15.121 Persistent Memory Region: Not Supported 00:20:15.121 Optional Asynchronous Events Supported 00:20:15.121 Namespace Attribute Notices: Not Supported 00:20:15.121 Firmware Activation Notices: Not Supported 00:20:15.121 ANA Change Notices: Not Supported 00:20:15.121 PLE Aggregate Log Change Notices: Not Supported 00:20:15.121 LBA Status Info Alert Notices: Not Supported 00:20:15.121 EGE Aggregate Log Change Notices: Not Supported 00:20:15.121 Normal NVM Subsystem Shutdown event: Not Supported 00:20:15.121 Zone Descriptor Change Notices: Not Supported 00:20:15.121 Discovery Log Change Notices: Supported 00:20:15.121 Controller Attributes 00:20:15.121 128-bit Host Identifier: Not Supported 00:20:15.121 Non-Operational Permissive Mode: Not Supported 00:20:15.121 NVM Sets: Not Supported 00:20:15.121 Read Recovery Levels: Not Supported 00:20:15.121 Endurance Groups: Not Supported 00:20:15.121 Predictable Latency Mode: Not Supported 00:20:15.121 Traffic Based Keep ALive: Not Supported 00:20:15.121 Namespace Granularity: Not Supported 00:20:15.121 SQ Associations: Not Supported 00:20:15.121 UUID List: Not Supported 00:20:15.121 Multi-Domain Subsystem: Not Supported 00:20:15.121 Fixed Capacity Management: Not Supported 00:20:15.121 Variable Capacity Management: Not Supported 00:20:15.121 Delete Endurance Group: Not Supported 00:20:15.122 Delete NVM Set: Not Supported 00:20:15.122 Extended LBA Formats Supported: Not Supported 00:20:15.122 Flexible Data 
Placement Supported: Not Supported 00:20:15.122 00:20:15.122 Controller Memory Buffer Support 00:20:15.122 ================================ 00:20:15.122 Supported: No 00:20:15.122 00:20:15.122 Persistent Memory Region Support 00:20:15.122 ================================ 00:20:15.122 Supported: No 00:20:15.122 00:20:15.122 Admin Command Set Attributes 00:20:15.122 ============================ 00:20:15.122 Security Send/Receive: Not Supported 00:20:15.122 Format NVM: Not Supported 00:20:15.122 Firmware Activate/Download: Not Supported 00:20:15.122 Namespace Management: Not Supported 00:20:15.122 Device Self-Test: Not Supported 00:20:15.122 Directives: Not Supported 00:20:15.122 NVMe-MI: Not Supported 00:20:15.122 Virtualization Management: Not Supported 00:20:15.122 Doorbell Buffer Config: Not Supported 00:20:15.122 Get LBA Status Capability: Not Supported 00:20:15.122 Command & Feature Lockdown Capability: Not Supported 00:20:15.122 Abort Command Limit: 1 00:20:15.122 Async Event Request Limit: 1 00:20:15.122 Number of Firmware Slots: N/A 00:20:15.122 Firmware Slot 1 Read-Only: N/A 00:20:15.122 Firmware Activation Without Reset: N/A 00:20:15.122 Multiple Update Detection Support: N/A 00:20:15.122 Firmware Update Granularity: No Information Provided 00:20:15.122 Per-Namespace SMART Log: No 00:20:15.122 Asymmetric Namespace Access Log Page: Not Supported 00:20:15.122 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:15.122 Command Effects Log Page: Not Supported 00:20:15.122 Get Log Page Extended Data: Supported 00:20:15.122 Telemetry Log Pages: Not Supported 00:20:15.122 Persistent Event Log Pages: Not Supported 00:20:15.122 Supported Log Pages Log Page: May Support 00:20:15.122 Commands Supported & Effects Log Page: Not Supported 00:20:15.122 Feature Identifiers & Effects Log Page:May Support 00:20:15.122 NVMe-MI Commands & Effects Log Page: May Support 00:20:15.122 Data Area 4 for Telemetry Log: Not Supported 00:20:15.122 Error Log Page Entries Supported: 1 00:20:15.122 Keep Alive: Not Supported 00:20:15.122 00:20:15.122 NVM Command Set Attributes 00:20:15.122 ========================== 00:20:15.122 Submission Queue Entry Size 00:20:15.122 Max: 1 00:20:15.122 Min: 1 00:20:15.122 Completion Queue Entry Size 00:20:15.122 Max: 1 00:20:15.122 Min: 1 00:20:15.122 Number of Namespaces: 0 00:20:15.122 Compare Command: Not Supported 00:20:15.122 Write Uncorrectable Command: Not Supported 00:20:15.122 Dataset Management Command: Not Supported 00:20:15.122 Write Zeroes Command: Not Supported 00:20:15.122 Set Features Save Field: Not Supported 00:20:15.122 Reservations: Not Supported 00:20:15.122 Timestamp: Not Supported 00:20:15.122 Copy: Not Supported 00:20:15.122 Volatile Write Cache: Not Present 00:20:15.122 Atomic Write Unit (Normal): 1 00:20:15.122 Atomic Write Unit (PFail): 1 00:20:15.122 Atomic Compare & Write Unit: 1 00:20:15.122 Fused Compare & Write: Not Supported 00:20:15.122 Scatter-Gather List 00:20:15.122 SGL Command Set: Supported 00:20:15.122 SGL Keyed: Not Supported 00:20:15.122 SGL Bit Bucket Descriptor: Not Supported 00:20:15.122 SGL Metadata Pointer: Not Supported 00:20:15.122 Oversized SGL: Not Supported 00:20:15.122 SGL Metadata Address: Not Supported 00:20:15.122 SGL Offset: Supported 00:20:15.122 Transport SGL Data Block: Not Supported 00:20:15.122 Replay Protected Memory Block: Not Supported 00:20:15.122 00:20:15.122 Firmware Slot Information 00:20:15.122 ========================= 00:20:15.122 Active slot: 0 00:20:15.122 00:20:15.122 00:20:15.122 Error Log 
00:20:15.122 ========= 00:20:15.122 00:20:15.122 Active Namespaces 00:20:15.122 ================= 00:20:15.122 Discovery Log Page 00:20:15.122 ================== 00:20:15.122 Generation Counter: 2 00:20:15.122 Number of Records: 2 00:20:15.122 Record Format: 0 00:20:15.122 00:20:15.122 Discovery Log Entry 0 00:20:15.122 ---------------------- 00:20:15.122 Transport Type: 3 (TCP) 00:20:15.122 Address Family: 1 (IPv4) 00:20:15.122 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:15.122 Entry Flags: 00:20:15.122 Duplicate Returned Information: 0 00:20:15.122 Explicit Persistent Connection Support for Discovery: 0 00:20:15.122 Transport Requirements: 00:20:15.122 Secure Channel: Not Specified 00:20:15.122 Port ID: 1 (0x0001) 00:20:15.122 Controller ID: 65535 (0xffff) 00:20:15.122 Admin Max SQ Size: 32 00:20:15.122 Transport Service Identifier: 4420 00:20:15.122 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:15.122 Transport Address: 10.0.0.1 00:20:15.122 Discovery Log Entry 1 00:20:15.122 ---------------------- 00:20:15.122 Transport Type: 3 (TCP) 00:20:15.122 Address Family: 1 (IPv4) 00:20:15.122 Subsystem Type: 2 (NVM Subsystem) 00:20:15.122 Entry Flags: 00:20:15.122 Duplicate Returned Information: 0 00:20:15.122 Explicit Persistent Connection Support for Discovery: 0 00:20:15.122 Transport Requirements: 00:20:15.122 Secure Channel: Not Specified 00:20:15.122 Port ID: 1 (0x0001) 00:20:15.122 Controller ID: 65535 (0xffff) 00:20:15.122 Admin Max SQ Size: 32 00:20:15.122 Transport Service Identifier: 4420 00:20:15.122 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:15.122 Transport Address: 10.0.0.1 00:20:15.122 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:15.382 get_feature(0x01) failed 00:20:15.383 get_feature(0x02) failed 00:20:15.383 get_feature(0x04) failed 00:20:15.383 ===================================================== 00:20:15.383 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:15.383 ===================================================== 00:20:15.383 Controller Capabilities/Features 00:20:15.383 ================================ 00:20:15.383 Vendor ID: 0000 00:20:15.383 Subsystem Vendor ID: 0000 00:20:15.383 Serial Number: d94ee2020de4550d1fb5 00:20:15.383 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:15.383 Firmware Version: 6.8.9-20 00:20:15.383 Recommended Arb Burst: 6 00:20:15.383 IEEE OUI Identifier: 00 00 00 00:20:15.383 Multi-path I/O 00:20:15.383 May have multiple subsystem ports: Yes 00:20:15.383 May have multiple controllers: Yes 00:20:15.383 Associated with SR-IOV VF: No 00:20:15.383 Max Data Transfer Size: Unlimited 00:20:15.383 Max Number of Namespaces: 1024 00:20:15.383 Max Number of I/O Queues: 128 00:20:15.383 NVMe Specification Version (VS): 1.3 00:20:15.383 NVMe Specification Version (Identify): 1.3 00:20:15.383 Maximum Queue Entries: 1024 00:20:15.383 Contiguous Queues Required: No 00:20:15.383 Arbitration Mechanisms Supported 00:20:15.383 Weighted Round Robin: Not Supported 00:20:15.383 Vendor Specific: Not Supported 00:20:15.383 Reset Timeout: 7500 ms 00:20:15.383 Doorbell Stride: 4 bytes 00:20:15.383 NVM Subsystem Reset: Not Supported 00:20:15.383 Command Sets Supported 00:20:15.383 NVM Command Set: Supported 00:20:15.383 Boot Partition: Not Supported 00:20:15.383 Memory 
Page Size Minimum: 4096 bytes 00:20:15.383 Memory Page Size Maximum: 4096 bytes 00:20:15.383 Persistent Memory Region: Not Supported 00:20:15.383 Optional Asynchronous Events Supported 00:20:15.383 Namespace Attribute Notices: Supported 00:20:15.383 Firmware Activation Notices: Not Supported 00:20:15.383 ANA Change Notices: Supported 00:20:15.383 PLE Aggregate Log Change Notices: Not Supported 00:20:15.383 LBA Status Info Alert Notices: Not Supported 00:20:15.383 EGE Aggregate Log Change Notices: Not Supported 00:20:15.383 Normal NVM Subsystem Shutdown event: Not Supported 00:20:15.383 Zone Descriptor Change Notices: Not Supported 00:20:15.383 Discovery Log Change Notices: Not Supported 00:20:15.383 Controller Attributes 00:20:15.383 128-bit Host Identifier: Supported 00:20:15.383 Non-Operational Permissive Mode: Not Supported 00:20:15.383 NVM Sets: Not Supported 00:20:15.383 Read Recovery Levels: Not Supported 00:20:15.383 Endurance Groups: Not Supported 00:20:15.383 Predictable Latency Mode: Not Supported 00:20:15.383 Traffic Based Keep ALive: Supported 00:20:15.383 Namespace Granularity: Not Supported 00:20:15.383 SQ Associations: Not Supported 00:20:15.383 UUID List: Not Supported 00:20:15.383 Multi-Domain Subsystem: Not Supported 00:20:15.383 Fixed Capacity Management: Not Supported 00:20:15.383 Variable Capacity Management: Not Supported 00:20:15.383 Delete Endurance Group: Not Supported 00:20:15.383 Delete NVM Set: Not Supported 00:20:15.383 Extended LBA Formats Supported: Not Supported 00:20:15.383 Flexible Data Placement Supported: Not Supported 00:20:15.383 00:20:15.383 Controller Memory Buffer Support 00:20:15.383 ================================ 00:20:15.383 Supported: No 00:20:15.383 00:20:15.383 Persistent Memory Region Support 00:20:15.383 ================================ 00:20:15.383 Supported: No 00:20:15.383 00:20:15.383 Admin Command Set Attributes 00:20:15.383 ============================ 00:20:15.383 Security Send/Receive: Not Supported 00:20:15.383 Format NVM: Not Supported 00:20:15.383 Firmware Activate/Download: Not Supported 00:20:15.383 Namespace Management: Not Supported 00:20:15.383 Device Self-Test: Not Supported 00:20:15.383 Directives: Not Supported 00:20:15.383 NVMe-MI: Not Supported 00:20:15.383 Virtualization Management: Not Supported 00:20:15.383 Doorbell Buffer Config: Not Supported 00:20:15.383 Get LBA Status Capability: Not Supported 00:20:15.383 Command & Feature Lockdown Capability: Not Supported 00:20:15.383 Abort Command Limit: 4 00:20:15.383 Async Event Request Limit: 4 00:20:15.383 Number of Firmware Slots: N/A 00:20:15.383 Firmware Slot 1 Read-Only: N/A 00:20:15.383 Firmware Activation Without Reset: N/A 00:20:15.383 Multiple Update Detection Support: N/A 00:20:15.383 Firmware Update Granularity: No Information Provided 00:20:15.383 Per-Namespace SMART Log: Yes 00:20:15.383 Asymmetric Namespace Access Log Page: Supported 00:20:15.383 ANA Transition Time : 10 sec 00:20:15.383 00:20:15.383 Asymmetric Namespace Access Capabilities 00:20:15.383 ANA Optimized State : Supported 00:20:15.383 ANA Non-Optimized State : Supported 00:20:15.383 ANA Inaccessible State : Supported 00:20:15.383 ANA Persistent Loss State : Supported 00:20:15.383 ANA Change State : Supported 00:20:15.383 ANAGRPID is not changed : No 00:20:15.383 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:15.383 00:20:15.383 ANA Group Identifier Maximum : 128 00:20:15.383 Number of ANA Group Identifiers : 128 00:20:15.383 Max Number of Allowed Namespaces : 1024 00:20:15.383 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:15.383 Command Effects Log Page: Supported 00:20:15.383 Get Log Page Extended Data: Supported 00:20:15.383 Telemetry Log Pages: Not Supported 00:20:15.383 Persistent Event Log Pages: Not Supported 00:20:15.383 Supported Log Pages Log Page: May Support 00:20:15.383 Commands Supported & Effects Log Page: Not Supported 00:20:15.383 Feature Identifiers & Effects Log Page:May Support 00:20:15.383 NVMe-MI Commands & Effects Log Page: May Support 00:20:15.383 Data Area 4 for Telemetry Log: Not Supported 00:20:15.383 Error Log Page Entries Supported: 128 00:20:15.383 Keep Alive: Supported 00:20:15.383 Keep Alive Granularity: 1000 ms 00:20:15.383 00:20:15.383 NVM Command Set Attributes 00:20:15.383 ========================== 00:20:15.383 Submission Queue Entry Size 00:20:15.383 Max: 64 00:20:15.383 Min: 64 00:20:15.383 Completion Queue Entry Size 00:20:15.383 Max: 16 00:20:15.383 Min: 16 00:20:15.383 Number of Namespaces: 1024 00:20:15.383 Compare Command: Not Supported 00:20:15.383 Write Uncorrectable Command: Not Supported 00:20:15.383 Dataset Management Command: Supported 00:20:15.383 Write Zeroes Command: Supported 00:20:15.383 Set Features Save Field: Not Supported 00:20:15.383 Reservations: Not Supported 00:20:15.383 Timestamp: Not Supported 00:20:15.384 Copy: Not Supported 00:20:15.384 Volatile Write Cache: Present 00:20:15.384 Atomic Write Unit (Normal): 1 00:20:15.384 Atomic Write Unit (PFail): 1 00:20:15.384 Atomic Compare & Write Unit: 1 00:20:15.384 Fused Compare & Write: Not Supported 00:20:15.384 Scatter-Gather List 00:20:15.384 SGL Command Set: Supported 00:20:15.384 SGL Keyed: Not Supported 00:20:15.384 SGL Bit Bucket Descriptor: Not Supported 00:20:15.384 SGL Metadata Pointer: Not Supported 00:20:15.384 Oversized SGL: Not Supported 00:20:15.384 SGL Metadata Address: Not Supported 00:20:15.384 SGL Offset: Supported 00:20:15.384 Transport SGL Data Block: Not Supported 00:20:15.384 Replay Protected Memory Block: Not Supported 00:20:15.384 00:20:15.384 Firmware Slot Information 00:20:15.384 ========================= 00:20:15.384 Active slot: 0 00:20:15.384 00:20:15.384 Asymmetric Namespace Access 00:20:15.384 =========================== 00:20:15.384 Change Count : 0 00:20:15.384 Number of ANA Group Descriptors : 1 00:20:15.384 ANA Group Descriptor : 0 00:20:15.384 ANA Group ID : 1 00:20:15.384 Number of NSID Values : 1 00:20:15.384 Change Count : 0 00:20:15.384 ANA State : 1 00:20:15.384 Namespace Identifier : 1 00:20:15.384 00:20:15.384 Commands Supported and Effects 00:20:15.384 ============================== 00:20:15.384 Admin Commands 00:20:15.384 -------------- 00:20:15.384 Get Log Page (02h): Supported 00:20:15.384 Identify (06h): Supported 00:20:15.384 Abort (08h): Supported 00:20:15.384 Set Features (09h): Supported 00:20:15.384 Get Features (0Ah): Supported 00:20:15.384 Asynchronous Event Request (0Ch): Supported 00:20:15.384 Keep Alive (18h): Supported 00:20:15.384 I/O Commands 00:20:15.384 ------------ 00:20:15.384 Flush (00h): Supported 00:20:15.384 Write (01h): Supported LBA-Change 00:20:15.384 Read (02h): Supported 00:20:15.384 Write Zeroes (08h): Supported LBA-Change 00:20:15.384 Dataset Management (09h): Supported 00:20:15.384 00:20:15.384 Error Log 00:20:15.384 ========= 00:20:15.384 Entry: 0 00:20:15.384 Error Count: 0x3 00:20:15.384 Submission Queue Id: 0x0 00:20:15.384 Command Id: 0x5 00:20:15.384 Phase Bit: 0 00:20:15.384 Status Code: 0x2 00:20:15.384 Status Code Type: 0x0 00:20:15.384 Do Not Retry: 1 00:20:15.384 Error 
Location: 0x28 00:20:15.384 LBA: 0x0 00:20:15.384 Namespace: 0x0 00:20:15.384 Vendor Log Page: 0x0 00:20:15.384 ----------- 00:20:15.384 Entry: 1 00:20:15.384 Error Count: 0x2 00:20:15.384 Submission Queue Id: 0x0 00:20:15.384 Command Id: 0x5 00:20:15.384 Phase Bit: 0 00:20:15.384 Status Code: 0x2 00:20:15.384 Status Code Type: 0x0 00:20:15.384 Do Not Retry: 1 00:20:15.384 Error Location: 0x28 00:20:15.384 LBA: 0x0 00:20:15.384 Namespace: 0x0 00:20:15.384 Vendor Log Page: 0x0 00:20:15.384 ----------- 00:20:15.384 Entry: 2 00:20:15.384 Error Count: 0x1 00:20:15.384 Submission Queue Id: 0x0 00:20:15.384 Command Id: 0x4 00:20:15.384 Phase Bit: 0 00:20:15.384 Status Code: 0x2 00:20:15.384 Status Code Type: 0x0 00:20:15.384 Do Not Retry: 1 00:20:15.384 Error Location: 0x28 00:20:15.384 LBA: 0x0 00:20:15.384 Namespace: 0x0 00:20:15.384 Vendor Log Page: 0x0 00:20:15.384 00:20:15.384 Number of Queues 00:20:15.384 ================ 00:20:15.384 Number of I/O Submission Queues: 128 00:20:15.384 Number of I/O Completion Queues: 128 00:20:15.384 00:20:15.384 ZNS Specific Controller Data 00:20:15.384 ============================ 00:20:15.384 Zone Append Size Limit: 0 00:20:15.384 00:20:15.384 00:20:15.384 Active Namespaces 00:20:15.384 ================= 00:20:15.384 get_feature(0x05) failed 00:20:15.384 Namespace ID:1 00:20:15.384 Command Set Identifier: NVM (00h) 00:20:15.384 Deallocate: Supported 00:20:15.384 Deallocated/Unwritten Error: Not Supported 00:20:15.384 Deallocated Read Value: Unknown 00:20:15.384 Deallocate in Write Zeroes: Not Supported 00:20:15.384 Deallocated Guard Field: 0xFFFF 00:20:15.384 Flush: Supported 00:20:15.384 Reservation: Not Supported 00:20:15.384 Namespace Sharing Capabilities: Multiple Controllers 00:20:15.384 Size (in LBAs): 1310720 (5GiB) 00:20:15.384 Capacity (in LBAs): 1310720 (5GiB) 00:20:15.384 Utilization (in LBAs): 1310720 (5GiB) 00:20:15.384 UUID: 4ebf41b8-921b-4340-8ce9-f1dd0a42aba3 00:20:15.384 Thin Provisioning: Not Supported 00:20:15.384 Per-NS Atomic Units: Yes 00:20:15.384 Atomic Boundary Size (Normal): 0 00:20:15.384 Atomic Boundary Size (PFail): 0 00:20:15.384 Atomic Boundary Offset: 0 00:20:15.384 NGUID/EUI64 Never Reused: No 00:20:15.384 ANA group ID: 1 00:20:15.384 Namespace Write Protected: No 00:20:15.384 Number of LBA Formats: 1 00:20:15.384 Current LBA Format: LBA Format #00 00:20:15.384 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:15.384 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:15.384 rmmod nvme_tcp 00:20:15.384 rmmod nvme_fabrics 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:15.384 15:28:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:15.384 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:15.385 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:15.385 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:15.644 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.644 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:15.644 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:15.644 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:15.644 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:15.644 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:15.644 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:15.644 15:28:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:15.644 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.644 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.644 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:15.644 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.644 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.644 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.644 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:20:15.644 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:15.644 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:15.644 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:20:15.902 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:15.902 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:15.902 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:15.902 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:15.902 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:15.902 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:15.902 15:28:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:16.891 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:16.891 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:16.891 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:16.891 ************************************ 00:20:16.891 END TEST nvmf_identify_kernel_target 00:20:16.891 ************************************ 00:20:16.891 00:20:16.891 real 0m4.018s 00:20:16.891 user 0m1.319s 00:20:16.891 sys 0m2.064s 00:20:16.891 15:28:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.891 15:28:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.891 15:28:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:16.891 15:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:16.891 15:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.891 15:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.891 ************************************ 00:20:16.891 START TEST nvmf_auth_host 00:20:16.891 ************************************ 00:20:16.891 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:17.166 * Looking for test storage... 
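Before the nvmf_auth_host output picks up, it is worth pinning down what the identify_kernel_target test above actually did: the in-kernel nvmet target is driven entirely through configfs. xtrace prints the mkdir/echo/ln commands (nvmf/common.sh@686-705, teardown @712-723) but not their redirection targets, so the condensed replay below fills those in with the standard nvmet attribute names -- attr_model, attr_allow_any_host, device_path, enable and addr_* are inferred rather than visible in the trace, though the "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" reported by Identify corroborates the attr_model guess.

# Build the kernel NVMe-oF/TCP target for /dev/nvme1n1 (cf. nvmf/common.sh@686-705).
# Redirection targets are the usual nvmet configfs attributes, inferred from the values.
cfs=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
mkdir "$cfs/subsystems/$nqn"                                  # one subsystem...
mkdir "$cfs/subsystems/$nqn/namespaces/1"                     # ...with one namespace
mkdir "$cfs/ports/1"                                          # ...on one port
echo "SPDK-$nqn" > "$cfs/subsystems/$nqn/attr_model"          # model string (assumed target file)
echo 1 > "$cfs/subsystems/$nqn/attr_allow_any_host"           # no host ACL (assumed target file)
echo /dev/nvme1n1 > "$cfs/subsystems/$nqn/namespaces/1/device_path"
echo 1 > "$cfs/subsystems/$nqn/namespaces/1/enable"
echo 10.0.0.1 > "$cfs/ports/1/addr_traddr"
echo tcp > "$cfs/ports/1/addr_trtype"
echo 4420 > "$cfs/ports/1/addr_trsvcid"
echo ipv4 > "$cfs/ports/1/addr_adrfam"
ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"       # publish the subsystem on the port

# clean_kernel_target (nvmf/common.sh@712-723) unwinds it in reverse:
echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"
rm -f "$cfs/ports/1/subsystems/$nqn"
rmdir "$cfs/subsystems/$nqn/namespaces/1" "$cfs/ports/1" "$cfs/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet

The earlier for-loop over /sys/block/nvme* kept overwriting $nvme, so the namespace exported is simply the last block device that passed the zoned/GPT screening (the "No valid GPT data, bailing" lines), here /dev/nvme1n1.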
00:20:17.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:17.166 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:17.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.167 --rc genhtml_branch_coverage=1 00:20:17.167 --rc genhtml_function_coverage=1 00:20:17.167 --rc genhtml_legend=1 00:20:17.167 --rc geninfo_all_blocks=1 00:20:17.167 --rc geninfo_unexecuted_blocks=1 00:20:17.167 00:20:17.167 ' 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:17.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.167 --rc genhtml_branch_coverage=1 00:20:17.167 --rc genhtml_function_coverage=1 00:20:17.167 --rc genhtml_legend=1 00:20:17.167 --rc geninfo_all_blocks=1 00:20:17.167 --rc geninfo_unexecuted_blocks=1 00:20:17.167 00:20:17.167 ' 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:17.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.167 --rc genhtml_branch_coverage=1 00:20:17.167 --rc genhtml_function_coverage=1 00:20:17.167 --rc genhtml_legend=1 00:20:17.167 --rc geninfo_all_blocks=1 00:20:17.167 --rc geninfo_unexecuted_blocks=1 00:20:17.167 00:20:17.167 ' 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:17.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.167 --rc genhtml_branch_coverage=1 00:20:17.167 --rc genhtml_function_coverage=1 00:20:17.167 --rc genhtml_legend=1 00:20:17.167 --rc geninfo_all_blocks=1 00:20:17.167 --rc geninfo_unexecuted_blocks=1 00:20:17.167 00:20:17.167 ' 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:17.167 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:17.167 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:17.427 Cannot find device "nvmf_init_br" 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:17.427 Cannot find device "nvmf_init_br2" 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:17.427 Cannot find device "nvmf_tgt_br" 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:17.427 Cannot find device "nvmf_tgt_br2" 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:17.427 Cannot find device "nvmf_init_br" 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:17.427 Cannot find device "nvmf_init_br2" 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:17.427 Cannot find device "nvmf_tgt_br" 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:17.427 Cannot find device "nvmf_tgt_br2" 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:17.427 Cannot find device "nvmf_br" 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:17.427 Cannot find device "nvmf_init_if" 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:17.427 Cannot find device "nvmf_init_if2" 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:17.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.427 15:28:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:17.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:17.427 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:17.428 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:17.687 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:17.687 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:17.687 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:17.687 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:17.687 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:17.687 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:17.687 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:17.687 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:17.688 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:17.688 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:17.688 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:17.688 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
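The interface zoo being configured here comes from nvmf_veth_init: two veth pairs for the initiator side, two whose far ends are pushed into the nvmf_tgt_ns_spdk namespace, all stitched together by the nvmf_br bridge (one final `master nvmf_br` enslavement follows below). The "Cannot find device" / "Cannot open network namespace" messages before it are the expected no-op pre-clean of a topology that did not exist yet. A condensed replay of the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator side, 10.0.0.2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side, 10.0.0.3
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target side, 10.0.0.4
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # move target ends into the netns
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if && ip link set nvmf_init_if up
ip addr add 10.0.0.2/24 dev nvmf_init_if2 && ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" up && ip link set "$br" master nvmf_br  # bridge all host-side peer ends
done

The four pings that follow (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside the namespace) confirm both directions across the bridge before the target is started.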
00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:17.688 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:17.688 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:20:17.688 00:20:17.688 --- 10.0.0.3 ping statistics --- 00:20:17.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.688 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:17.688 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:17.688 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:20:17.688 00:20:17.688 --- 10.0.0.4 ping statistics --- 00:20:17.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.688 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:17.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:20:17.688 00:20:17.688 --- 10.0.0.1 ping statistics --- 00:20:17.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.688 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:17.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:17.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:20:17.688 00:20:17.688 --- 10.0.0.2 ping statistics --- 00:20:17.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.688 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.688 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.948 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=93111 00:20:17.948 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:17.948 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 93111 00:20:17.948 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 93111 ']' 00:20:17.948 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.948 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.948 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
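Two details of the plumbing that just completed are easy to miss in the trace. First, every firewall rule goes in through the ipts wrapper, which tags the rule with an SPDK_NVMF comment; the iptr teardown seen during the earlier nvmftestfini then removes them by filtering iptables-save output instead of tracking rule numbers. Reconstructed from the expansions visible above (nvmf/common.sh@790-791):

ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }    # tag every rule we add
ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on both
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT   # initiator interfaces
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # allow bridge-local forwarding

iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; } # teardown: drop tagged rules

Second, because the target must live inside the namespace, NVMF_APP is re-prefixed with the netns command (nvmf/common.sh@227), which is why nvmf_tgt is launched as `ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth` just above.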
00:20:17.948 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.948 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=79a09efef97c6db7a948c37c34124554 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lcb 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 79a09efef97c6db7a948c37c34124554 0 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 79a09efef97c6db7a948c37c34124554 0 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=79a09efef97c6db7a948c37c34124554 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lcb 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.lcb 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.lcb 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:18.888 15:28:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d99329217dc9c0f09129ebdd907b8fa554f82885148aef3d02eca734c257fc95 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Kpk 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d99329217dc9c0f09129ebdd907b8fa554f82885148aef3d02eca734c257fc95 3 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d99329217dc9c0f09129ebdd907b8fa554f82885148aef3d02eca734c257fc95 3 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d99329217dc9c0f09129ebdd907b8fa554f82885148aef3d02eca734c257fc95 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Kpk 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Kpk 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Kpk 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=09dc54ecef6a8486e86a85df141450028c6d592f5ee81422 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.0dO 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 09dc54ecef6a8486e86a85df141450028c6d592f5ee81422 0 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 09dc54ecef6a8486e86a85df141450028c6d592f5ee81422 0 
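Every gen_dhchap_key call in this stretch has the same shape: xxd pulls len/2 random bytes from /dev/urandom as a hex string, mktemp creates a spdk.key-<digest>.XXX file, and format_dhchap_key wraps the secret as a DHHC-1 string whose digest field comes from the digests map (null=0, sha256=1, sha384=2, sha512=3). The body of the python - step is not visible in the trace; the sketch below reconstructs it on the assumption that the payload is the ASCII hex key plus a little-endian CRC32 tail, base64-encoded, which matches the DHHC-1 strings printed later in this log (their base64 payloads decode to the hex key plus four trailing bytes).

# One gen_dhchap_key round, reconstructed from the trace; the CRC32 tail in
# the python step is an inference from the output format, not from common.sh.
digest=null len=32                 # digests: null=0 sha256=1 sha384=2 sha512=3
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-$digest.XXX)
python3 - "$key" 0 > "$file" <<'PYEOF'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
blob = key + struct.pack('<I', zlib.crc32(key))   # ASCII hex key + CRC32 tail
print('DHHC-1:%02x:%s:' % (digest, base64.b64encode(blob).decode()))
PYEOF
chmod 0600 "$file"                 # the secret file is chmod 0600, as above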
00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=09dc54ecef6a8486e86a85df141450028c6d592f5ee81422 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:18.888 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.0dO 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.0dO 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.0dO 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e768069000d678ebf780613cb113a45bb51724b3b3e3429a 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.R0D 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e768069000d678ebf780613cb113a45bb51724b3b3e3429a 2 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e768069000d678ebf780613cb113a45bb51724b3b3e3429a 2 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e768069000d678ebf780613cb113a45bb51724b3b3e3429a 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.R0D 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.R0D 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.R0D 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.148 15:28:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=75936e331cc1885761463dca59ac73f1 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Kcd 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 75936e331cc1885761463dca59ac73f1 1 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 75936e331cc1885761463dca59ac73f1 1 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=75936e331cc1885761463dca59ac73f1 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Kcd 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Kcd 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Kcd 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7c090313da6b3962ef92eb5fb32d8e06 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.VOY 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7c090313da6b3962ef92eb5fb32d8e06 1 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7c090313da6b3962ef92eb5fb32d8e06 1 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=7c090313da6b3962ef92eb5fb32d8e06 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.VOY 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.VOY 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.VOY 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:19.148 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8fa4da1c60c1dd0dce41c46d3c57ae8c4e6dde1b5a5e7bf3 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ojg 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8fa4da1c60c1dd0dce41c46d3c57ae8c4e6dde1b5a5e7bf3 2 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8fa4da1c60c1dd0dce41c46d3c57ae8c4e6dde1b5a5e7bf3 2 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8fa4da1c60c1dd0dce41c46d3c57ae8c4e6dde1b5a5e7bf3 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ojg 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ojg 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ojg 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:19.408 15:28:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=12cd2b02c5b4d3e48d6f3644254fef87 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hyG 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 12cd2b02c5b4d3e48d6f3644254fef87 0 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 12cd2b02c5b4d3e48d6f3644254fef87 0 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:19.408 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=12cd2b02c5b4d3e48d6f3644254fef87 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hyG 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hyG 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.hyG 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=89561d0dd0fefc64df1d1e8e8800a48846622b89ea810f2bce888b8474156254 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.6gL 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 89561d0dd0fefc64df1d1e8e8800a48846622b89ea810f2bce888b8474156254 3 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 89561d0dd0fefc64df1d1e8e8800a48846622b89ea810f2bce888b8474156254 3 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=89561d0dd0fefc64df1d1e8e8800a48846622b89ea810f2bce888b8474156254 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.6gL 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.6gL 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.6gL 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 93111 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 93111 ']' 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.409 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lcb 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Kpk ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Kpk 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.0dO 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.R0D ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.R0D 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Kcd 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.VOY ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VOY 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ojg 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.hyG ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.hyG 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6gL 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:19.669 15:28:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:19.669 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:19.929 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:19.929 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:20.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:20.468 Waiting for block devices as requested 00:20:20.468 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:20.468 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:21.404 No valid GPT data, bailing 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:21.404 No valid GPT data, bailing 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:21.404 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:20:21.405 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:21.405 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:21.405 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:21.405 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:21.405 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:21.405 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:21.405 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:21.405 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:21.405 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:21.405 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:21.664 No valid GPT data, bailing 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:21.664 No valid GPT data, bailing 00:20:21.664 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:20:21.664 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -a 10.0.0.1 -t tcp -s 4420
00:20:21.665
00:20:21.665 Discovery Log Number of Records 2, Generation counter 2
00:20:21.665 =====Discovery Log Entry 0======
00:20:21.665 trtype: tcp
00:20:21.665 adrfam: ipv4
00:20:21.665 subtype: current discovery subsystem
00:20:21.665 treq: not specified, sq flow control disable supported
00:20:21.665 portid: 1
00:20:21.665 trsvcid: 4420
00:20:21.665 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:20:21.665 traddr: 10.0.0.1
00:20:21.665 eflags: none
00:20:21.665 sectype: none
00:20:21.665 =====Discovery Log Entry 1======
00:20:21.665 trtype: tcp
00:20:21.665 adrfam: ipv4
00:20:21.665 subtype: nvme subsystem
00:20:21.665 treq: not specified, sq flow control disable supported
00:20:21.665 portid: 1
00:20:21.665 trsvcid: 4420
00:20:21.665 subnqn: nqn.2024-02.io.spdk:cnode0
00:20:21.665 traddr: 10.0.0.1
00:20:21.665 eflags: none
00:20:21.665 sectype: none
00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:21.665 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.924 nvme0n1 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:21.924 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.925 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.185 nvme0n1 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.185 
15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.185 15:28:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.185 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.445 nvme0n1 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:22.445 15:28:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.445 nvme0n1 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:22.445 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:22.446 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:22.446 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:22.446 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:22.446 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:22.446 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:22.446 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:22.446 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.705 15:28:08 
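[editor's note] The nvmet_auth_set_key helper traced at host/auth.sh@42-51 programs the kernel nvmet target; its four echo lines map one-to-one onto the DH-CHAP attributes of the host entry in configfs, with the redirections elided by xtrace. A hedged reconstruction: the configfs path and attribute names below are the standard Linux nvmet ones and are an assumption here, not something this log shows.

    # Assumed layout: configfs mounted at /sys/kernel/config, host entry keyed by hostnqn
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"       # @48: digest
    echo ffdhe2048      > "$host/dhchap_dhgroup"    # @49: DH group
    echo "$key"         > "$host/dhchap_key"        # @50: host secret (DHHC-1:...)
    echo "$ckey"        > "$host/dhchap_ctrl_key"   # @51: controller secret, skipped when empty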
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.705 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.705 nvme0n1 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.705 
15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.705 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.706 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
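[editor's note] keyid 4 is the one-way case: its ckey is empty, so the [[ -z '' ]] test at host/auth.sh@51 skips the controller-secret write, and on the initiator side the array expansion at host/auth.sh@58 drops the --dhchap-ctrlr-key argument pair entirely. That expansion is worth a standalone demo; variable names mirror the script, and the echo is only illustrative:

    ckeys=([2]='DHHC-1:01:...:' [4]='')
    for keyid in 2 4; do
        # ${var:+word} expands to nothing when var is unset or empty, so the
        # whole option/value pair vanishes for keyid 4.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra arg(s):" "${ckey[@]}"
    done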
00:20:22.965 nvme0n1 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:22.965 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:23.224 15:28:09 
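[editor's note] The for headers at host/auth.sh@101 and @102 give away the overall shape of this test phase: for each DH group, walk every key id, program the target, then authenticate from the initiator. Read back from the trace into a compact sketch; the dhgroups list is truncated to what this excerpt exercises, and the real array may go further:

    digest=sha256
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)        # as exercised in this excerpt
    for dhgroup in "${dhgroups[@]}"; do             # @101
        for keyid in "${!keys[@]}"; do              # @102: key ids 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103: target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104: initiator side
        done
    done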
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.224 nvme0n1 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.224 15:28:09 
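[editor's note] Every attach is preceded by the same nvmf/common.sh@769-783 block resolving which address the initiator should dial. Reassembled into a readable function; the indirect expansion between @776 and @778 is implied by the trace (ip goes from the variable name NVMF_INITIATOR_IP to the value 10.0.0.1) rather than traced explicitly, so this is a sketch, not the verbatim source:

    get_main_ns_ip() {                                # nvmf/common.sh@769-783
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # @775: both -z checks share the same source line in the trace
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}          # @776: ip=NVMF_INITIATOR_IP
        ip=${!ip}                                     # indirection -> 10.0.0.1
        [[ -z $ip ]] && return 1                      # @778
        echo "$ip"                                    # @783
    }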
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.224 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.483 15:28:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.483 nvme0n1 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.483 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:23.484 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:23.484 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:23.484 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.484 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.484 15:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.744 nvme0n1 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.744 nvme0n1 00:20:23.744 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.004 nvme0n1 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.004 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.573 15:28:10 
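[editor's note] connect_authenticate (host/auth.sh@55-65) drives the initiator entirely over SPDK JSON-RPC. Stripped of the rpc_cmd wrapper, the sequence for the ffdhe4096/key0 pass being set up here would look roughly like the calls below; rpc_cmd is a thin wrapper over scripts/rpc.py, and key0/ckey0 are key names registered with the SPDK application beforehand rather than raw secrets, so take the shape of the invocations, not the literal command lines, from this sketch:

    # @60: restrict negotiation to the combination under test
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # @61: attach, authenticating with key0 (and ckey0 for the controller direction)
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # @64/@65: confirm the controller came up, then tear it down for the next keyid
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0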
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.573 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.833 nvme0n1 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.833 15:28:11 
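[editor's note] Worth pulling out of the repetition: between them, the five key ids cover every DHHC-1 transform and both directions of authentication. Summarized from the key=/ckey= assignments visible in this trace (payloads abbreviated):

    keyid 0: key DHHC-1:00:... (no transform)   ckey DHHC-1:03:... (SHA-512)
    keyid 1: key DHHC-1:00:... (no transform)   ckey DHHC-1:02:... (SHA-384)
    keyid 2: key DHHC-1:01:... (SHA-256)        ckey DHHC-1:01:... (SHA-256)
    keyid 3: key DHHC-1:02:... (SHA-384)        ckey DHHC-1:00:... (no transform)
    keyid 4: key DHHC-1:03:... (SHA-512)        ckey none (one-way authentication)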
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.833 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.092 nvme0n1 00:20:25.092 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.092 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.092 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.092 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.092 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.092 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.092 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.092 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.092 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.093 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.401 nvme0n1 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.401 nvme0n1 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.401 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:25.672 15:28:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.672 15:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.672 nvme0n1 00:20:25.672 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.672 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.672 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.672 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.672 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.672 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
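
Note why the keyid=4 attach above passes only --dhchap-key key4: host/auth.sh@58 builds the controller-key flag as an array through the ${var:+word} alternate-value expansion, which collapses to zero words when ckeys[4] is empty, so the last key slot exercises unidirectional authentication. A small standalone demo of that expansion (the placeholder secrets are hypothetical; only the empty fifth slot mirrors this run):

    ckeys=(ctrl0 ctrl1 ctrl2 ctrl3 "")   # slot 4 deliberately empty
    for keyid in "${!ckeys[@]}"; do
        # expands to two words when the slot is set, to nothing when empty
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no controller key, unidirectional>}"
    done
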
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.932 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:27.315 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:27.315 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
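
The echo quartet at host/auth.sh@48-51 ('hmac(sha256)', the DH group, the key, and optionally the controller key) has its redirections hidden, since set -x does not print them. Assuming the Linux kernel soft target (nvmet) is on the receiving end, writes like these would typically land in the host's configfs auth attributes; the paths below are an assumption based on the kernel's nvmet layout, not something this log shows:

    # assumed destinations of the echoes traced above
    hostnqn=nqn.2024-02.io.spdk:host0
    host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"   # digest under test
    echo ffdhe6144 > "$host_cfg/dhchap_dhgroup"     # DH group under test
    echo "$key" > "$host_cfg/dhchap_key"            # host secret (DHHC-1:...)
    # controller secret is written only when testing bidirectional auth
    [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"
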
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.316 nvme0n1 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.316 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.577 15:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.837 nvme0n1 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.837 15:28:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.837 15:28:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.837 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.097 nvme0n1 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.097 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:28.356 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.356 
15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.616 nvme0n1 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.616 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
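
For readability, here is the connect_authenticate sequence that each of these iterations traces, collected in one place. Every command and argument is taken verbatim from this run (this iteration: sha256, ffdhe6144, key slot 4); rpc_cmd is SPDK's test wrapper that forwards to scripts/rpc.py, and key4/ckey4 name key objects loaded earlier in the test, outside this excerpt:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4                          # no ckey4: unidirectional slot
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0      # tear down before the next keyid
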
common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.617 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.876 nvme0n1 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.876 15:28:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:28.876 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
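
A note on the shape of the secrets echoed throughout this sweep. As far as the NVMe in-band authentication secret format goes (treat this reading as a sketch; the log itself does not explain it), a secret is DHHC-1:<hh>:<base64 payload>:, where <hh> identifies the hash the secret is bound to (00 meaning a raw, untransformed secret) and the payload is the secret bytes with a 4-byte CRC-32 appended. Decoding the key-slot-0 secret from this run:

    key='DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn:'
    IFS=: read -r tag hash b64 _ <<< "$key"
    echo "$tag $hash"                  # DHHC-1 00
    echo "$b64" | base64 -d | wc -c    # 36: 32 secret bytes + 4 CRC bytes
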
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.877 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.446 nvme0n1 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
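
The backslash-riddled comparison [[ nvme0 == \n\v\m\e\0 ]] above is not corruption: inside [[ ]], == performs glob matching, and xtrace prints a quoted right-hand side with every character escaped to show it will be matched literally. The underlying assertion is presumably just this (a reconstruction; the literal source line is not in the log):

    # verify the authenticated controller actually came up as nvme0
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # traced as: [[ nvme0 == \n\v\m\e\0 ]]

The recurring [[ 0 == 0 ]] checks at autotest_common.sh@591 look like the same idea applied to return codes, comparing a captured status against 0 after each rpc_cmd.
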
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.446 15:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 nvme0n1 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.015 
15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.015 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.584 nvme0n1 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:30.584 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:30.584 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.584 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.153 nvme0n1 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:31.153 15:28:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.153 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:31.154 15:28:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.154 15:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.724 nvme0n1 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:31.724 nvme0n1 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:31.724 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.725 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.725 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:31.985 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.986 nvme0n1 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:31.986 
15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.986 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.247 nvme0n1 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.247 
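
On the host side, the connect_authenticate calls (host/auth.sh@55-61 markers) narrow the initiator to the one digest/dhgroup combination under test and then attach with the matching key. A sketch of that flow as it appears in the trace; the conditional array expansion means --dhchap-ctrlr-key is only passed when a controller key exists for that keyid, and the 10.0.0.1/4420 endpoint is the value get_main_ns_ip resolves in this run:

    # Sketch of the host-side flow visible in the xtrace; rpc_cmd wraps SPDK's rpc.py.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Expands to nothing when ckeys[keyid] is empty (e.g. keyid 4).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # Allow only the combination under test on the initiator.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
    }
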
15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.247 nvme0n1 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:32.247 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.508 nvme0n1 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:32.508 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.509 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.768 nvme0n1 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.768 
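
The get_main_ns_ip expansion that precedes every attach (the nvmf/common.sh@769-783 markers) is transport-keyed variable indirection: pick the environment variable that names the endpoint for the active transport, then print its value. Roughly, assuming a TEST_TRANSPORT variable carries the literal "tcp" seen in the trace:

    # Sketch of the IP selection; NVMF_INITIATOR_IP is assumed to be exported
    # by the test environment (10.0.0.1 in this run).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        local var=${ip_candidates[$TEST_TRANSPORT]:-}
        [[ -z $TEST_TRANSPORT || -z $var ]] && return 1
        ip=${!var}        # indirect expansion: $NVMF_INITIATOR_IP -> 10.0.0.1
        [[ -z $ip ]] && return 1
        echo "$ip"
    }
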
15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:32.768 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:32.769 15:28:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.769 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.028 nvme0n1 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:33.028 15:28:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.028 nvme0n1 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:33.028 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.029 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.029 15:28:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.288 nvme0n1 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:33.288 
15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:33.288 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.289 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
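
Between combinations, the test confirms that the controller actually authenticated and then tears it down (the host/auth.sh@64-65 markers); the bare "nvme0n1" tokens interleaved in the log are the namespace surfacing as each controller comes up. A sketch of that check; the escaped `\n\v\m\e\0` in the trace is just how xtrace renders the quoted pattern:

    # Verify the authenticated attach, then detach before the next keyid.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]    # rendered as [[ nvme0 == \n\v\m\e\0 ]] in the trace
    rpc_cmd bdev_nvme_detach_controller nvme0
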
00:20:33.548 nvme0n1 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:33.548 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:33.549 15:28:19 
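
The secrets themselves use the NVMe-oF DH-HMAC-CHAP representation "DHHC-1:tt:<base64 secret+CRC>:". As far as I can tell from the spec, the two-digit tt field encodes the secret's transformation/length class (00 = unhashed, 01/02/03 corresponding to SHA-256/384/512-sized retained secrets), which matches the lengths visible here: keyid 0 carries a 00-class key while keyid 4 carries the long 03-class one. A trivial parse, for illustration only:

    # Split a DHHC-1 secret into its fields (base64 never contains ':').
    key='DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=:'
    IFS=: read -r magic class secret _ <<< "$key"
    echo "format=$magic transform-class=$class secret(base64)=$secret"
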
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.549 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.873 nvme0n1 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.873 15:28:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:33.873 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.874 15:28:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.874 nvme0n1 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.874 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.133 nvme0n1 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.392 nvme0n1 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:34.392 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.393 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.652 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.652 nvme0n1 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:34.652 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.653 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.911 15:28:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.911 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.171 nvme0n1 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:35.171 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.172 15:28:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.172 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.430 nvme0n1 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.430 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.431 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.690 nvme0n1 00:20:35.690 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.690 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.690 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.690 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.690 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.948 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.948 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.949 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.209 nvme0n1 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.209 15:28:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.209 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.469 nvme0n1 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.469 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.729 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.988 nvme0n1 00:20:36.988 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.988 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.988 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.988 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.988 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.988 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.247 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.248 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.507 nvme0n1 00:20:37.766 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.766 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.766 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.766 15:28:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.766 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.766 15:28:24 
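
The nvmet_auth_set_key traces above (host/auth.sh@42-51) install each iteration's DH-HMAC-CHAP material on the kernel soft-target before the host attempts to connect. The xtrace records only the echo commands, not their redirections; the following is a minimal reconstruction, assuming the standard Linux nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the hostnqn used by the attach calls in this run. The real helper lives in SPDK's test/nvmf/host/auth.sh and may differ in detail.

    # Reconstruction of nvmet_auth_set_key from the xtrace; the redirection
    # targets are assumptions (standard nvmet configfs host attrs) since the
    # log shows only the echo side of each write.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host}/dhchap_hash"       # e.g. hmac(sha384)
        echo "${dhgroup}" > "${host}/dhchap_dhgroup"         # e.g. ffdhe8192
        echo "${key}" > "${host}/dhchap_key"                 # DHHC-1:xx:... host key
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"   # only for bidirectional auth
    }
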
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:37.766 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:37.767 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.767 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.767 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.335 nvme0n1 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:38.335 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:38.336 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.336 
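
The get_main_ns_ip fragments woven through the trace (nvmf/common.sh@769-783) pick the address for every attach: the helper maps the transport to the *name* of the environment variable holding the IP, then dereferences it, which is why the trace first shows ip=NVMF_INITIATOR_IP and only afterwards the literal 10.0.0.1. A sketch consistent with the trace follows; TEST_TRANSPORT as the consulted variable and the exact error handling are assumptions.

    # Sketch of get_main_ns_ip as traced; on the tcp path it resolves to
    # $NVMF_INITIATOR_IP (10.0.0.1 in this run).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # ${!ip}: indirect expansion of the chosen variable
        echo "${!ip}"
    }
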
15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.595 nvme0n1 00:20:38.595 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.862 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.862 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.862 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.862 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.862 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.862 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.863 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.460 nvme0n1 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:39.460 15:28:25 
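
keyid 4 in the block above has no companion controller key (ckeys[4] is empty), and both optional-key paths visibly collapse in the trace: the [[ -z '' ]] at host/auth.sh@51 skips the target-side ctrl-key write, and the :+ expansion at @58 leaves the host-side argument array empty, which is why that attach carries only --dhchap-key key4. The idiom, copied from the @58 trace line:

    # host/auth.sh@58: the :+ expansion yields the two extra arguments only
    # when a controller key exists for this keyid; for keyid 4 it expands to
    # nothing, so ckey=() and "${ckey[@]}" later contributes no arguments to
    # the attach command at all.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
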
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:39.460 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:39.461 15:28:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.461 nvme0n1 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:39.461 15:28:25 
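
Each connect_authenticate pass (host/auth.sh@55-65) follows the same four-step shape: restrict the initiator to the single digest/dhgroup combination under test, attach with that keyid's secrets, check that the controller actually materialized, then detach for the next round. The backslashes in [[ nvme0 == \n\v\m\e\0 ]] are only xtrace quoting of the literal pattern nvme0 on the right-hand side of the comparison. A condensed sketch, with rpc_cmd being the suite's JSON-RPC wrapper and the address, port, and NQNs taken from this run:

    # Condensed connect_authenticate as traced at host/auth.sh@55-65.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # allow exactly one digest/dhgroup pair on the initiator side
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # authentication succeeded only if the controller shows up
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
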
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.461 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.721 nvme0n1 00:20:39.721 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.721 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.721 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.721 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.721 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.721 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.721 nvme0n1 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.721 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.981 nvme0n1 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:39.981 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.982 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.242 nvme0n1 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.242 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:40.243 nvme0n1 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.243 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.502 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.502 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.502 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.502 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.502 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.502 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.502 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:40.502 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.502 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:40.502 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:40.502 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.503 nvme0n1 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:40.503 
15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.503 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.763 nvme0n1 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.763 
15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.763 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.023 nvme0n1 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:41.023 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:41.024 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.024 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.024 nvme0n1 00:20:41.024 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.024 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.024 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.024 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.024 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.024 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.024 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.024 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.024 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.024 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.283 nvme0n1 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.283 
15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:41.283 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.284 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:41.284 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:41.284 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:41.284 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.284 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:41.284 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.284 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:41.542 15:28:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.542 nvme0n1 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:41.542 15:28:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:41.542 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.542 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.801 nvme0n1 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.801 15:28:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.801 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.060 nvme0n1 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:42.060 
15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.060 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
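For readers following the trace: each iteration above first provisions the kernel nvmet target via nvmet_auth_set_key (host/auth.sh@42-51), which takes a digest, a DH group, and a key index, then echoes the corresponding values. The redirect targets of those echoes are not captured by xtrace, so the following is only a sketch of what the helper plausibly does, assuming the values land in the standard Linux nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under the host NQN; the paths are an assumption, not taken from this log.

    # Hypothetical reconstruction of nvmet_auth_set_key. The configfs paths are
    # assumed; the trace only shows the echoed values, not where they are written.
    # keys[] and ckeys[] are the global arrays the trace iterates with ${!keys[@]}.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "$host/dhchap_hash"      # e.g. 'hmac(sha512)'
        echo "$dhgroup"        > "$host/dhchap_dhgroup"   # e.g. ffdhe3072
        echo "$key"            > "$host/dhchap_key"       # DHHC-1:..: host secret
        # keyid 4 has no controller key in this run (ckey= is empty in the trace),
        # so the bidirectional-auth attribute is written only when a ckey exists.
        [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
    }

The 'hmac(sha512)' form of the echoed digest is consistent with how the kernel names HMAC transforms, which is what suggests the dhchap_hash destination.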
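The initiator half of each iteration is fully visible in the trace: connect_authenticate (host/auth.sh@55-65) narrows the allowed digest and DH group with bdev_nvme_set_options, attaches with the matching key pair, asserts that the controller came up, and detaches. The target address comes from get_main_ns_ip (nvmf/common.sh@769-783), which selects NVMF_INITIATOR_IP for tcp and NVMF_FIRST_TARGET_IP for rdma, resolving to 10.0.0.1 in this run. Condensed into a self-contained sketch; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, and key${keyid}/ckey${keyid} are the key names registered earlier in the run:

    # One connect/verify/disconnect cycle, as driven repeatedly in the log above.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # ${ckeys[keyid]:+...} expands to the extra flag only when a controller
        # key exists, so keyid 4 (empty ckey) attaches with --dhchap-key alone.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # The attach succeeds only if DH-HMAC-CHAP completed, so asserting the
        # controller name is the pass/fail check for the whole handshake.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The interleaved nvme0n1 lines are the namespace surfacing on the initiator after each successful handshake; an authentication failure would show up as a failed attach rather than a named controller.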
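As for the secrets themselves: every key in the trace uses the DHHC-1 interchange format, DHHC-1:<id>:<base64 secret plus CRC>:, where the id field (00 through 03 above) encodes the secret's length and transformation class, 00 meaning an untransformed secret. A hypothetical way to mint a comparable key, assuming a recent nvme-cli (the command and flag names are an assumption, not taken from this log):

    # Hypothetical: generate a 64-byte secret in DHHC-1 format, comparable to the
    # DHHC-1:03:... key used for keyid 4 above. Flags assume nvme-cli 2.x.
    nvme gen-dhchap-key --hmac=3 --key-length=64 --nqn=nqn.2024-02.io.spdk:host0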
00:20:42.319 nvme0n1 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:42.319 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:42.319 15:28:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.320 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.578 nvme0n1 00:20:42.578 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.578 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.578 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.578 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.578 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.836 15:28:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.836 15:28:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.836 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.094 nvme0n1 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:43.094 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.095 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.353 nvme0n1 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:43.353 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.354 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.611 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.943 nvme0n1 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.944 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.203 nvme0n1 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
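
Key 4 above is the one entry with no controller key: the trace shows ckey= set to the empty string and the attach carrying only --dhchap-key key4. The ${ckeys[keyid]:+...} expansion at host/auth.sh@58 is what drops the --dhchap-ctrlr-key option in that case, leaving the exchange unidirectional (the host proves itself to the target, but not the reverse). A standalone illustration of that expansion, with made-up array contents:

  # ":+" substitutes the alternate words only when the variable is set and
  # non-empty, so an empty ckeys[4] erases the whole option pair.
  declare -a ckeys=([1]="ckey1-secret" [4]="")   # illustrative values only
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"   # 0 -> attach runs without a controller key
  keyid=1
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"   # 2 -> both the flag and its value are passed
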
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzlhMDllZmVmOTdjNmRiN2E5NDhjMzdjMzQxMjQ1NTRZWERn: 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: ]] 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDk5MzI5MjE3ZGM5YzBmMDkxMjllYmRkOTA3YjhmYTU1NGY4Mjg4NTE0OGFlZjNkMDJlY2E3MzRjMjU3ZmM5NdjC+nw=: 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.203 15:28:30 
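
Each connect_authenticate round (host/auth.sh@55-61, as traced here) reduces to two JSON-RPC calls: pin the initiator to a single digest/dhgroup pair, then attach with the matching keyring entries. The equivalent via SPDK's stock rpc.py client; every flag and value below appears verbatim in the xtrace, while key0/ckey0 are keyring names registered earlier in the test and the default RPC socket is assumed:

  # 1) Restrict negotiation to exactly this digest/dhgroup combination.
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # 2) Attach over TCP, offering host key and controller key (bidirectional).
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
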
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.203 15:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.771 nvme0n1 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.771 15:28:31 
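
The secrets themselves use the NVMe-oF DH-HMAC-CHAP representation: "DHHC-1:" + a two-digit transform tag + a base64 blob (key plus a CRC-32 check) + ":". The tag, 00 through 03 in this log, records how the retained key was derived: 00 means the cleartext key is used as-is, while 01/02/03 mean a SHA-256/384/512 transform, which also fixes the key at 32/48/64 bytes. nvme-cli can mint such a secret; a hedged example follows, since the exact flag set varies by nvme-cli version and this test's keys were generated up front rather than here:

  # Produce a 32-byte secret with the SHA-256 (01) transform, bound to
  # this host NQN. Illustrative only.
  nvme gen-dhchap-key --key-length 32 --hmac 1 \
      --nqn nqn.2024-02.io.spdk:host0
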
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.771 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.338 nvme0n1 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.338 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.905 nvme0n1 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZhNGRhMWM2MGMxZGQwZGNlNDFjNDZkM2M1N2FlOGM0ZTZkZGUxYjVhNWU3YmYzq77DwA==: 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: ]] 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjZDJiMDJjNWI0ZDNlNDhkNmYzNjQ0MjU0ZmVmODe764aY: 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.905 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:45.906 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:45.906 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:45.906 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.906 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.906 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:45.906 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.906 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:45.906 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:45.906 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:45.906 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:45.906 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.906 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.481 nvme0n1 00:20:46.481 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.481 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.481 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.481 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.481 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.481 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.481 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.481 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.481 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1NjFkMGRkMGZlZmM2NGRmMWQxZThlODgwMGE0ODg0NjYyMmI4OWVhODEwZjJiY2U4ODhiODQ3NDE1NjI1NH7+ivY=: 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:46.482 15:28:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.482 15:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.052 nvme0n1 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
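
With key 4 connected, the ffdhe8192 sweep is complete. Taken together, the host/auth.sh@101-104 frames above walk a full matrix for the current digest: for every dhgroup, re-key the target with each key ID in turn and reconnect. Reconstructed shape of that loop (the keys/ckeys/dhgroups arrays are defined earlier in auth.sh and not shown in this excerpt, and an outer digest loop presumably wraps it):

  for dhgroup in "${dhgroups[@]}"; do                       # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                          # host/auth.sh@102
      nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"    # @103: re-key the target
      connect_authenticate "$digest" "$dhgroup" "$keyid"    # @104: attach, verify, detach
    done
  done
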
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:47.052 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.053 2024/11/20 15:28:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:47.053 request: 00:20:47.053 { 00:20:47.053 "method": "bdev_nvme_attach_controller", 00:20:47.053 "params": { 00:20:47.053 "name": "nvme0", 00:20:47.053 "trtype": "tcp", 00:20:47.053 "traddr": "10.0.0.1", 00:20:47.053 "adrfam": "ipv4", 00:20:47.053 "trsvcid": "4420", 00:20:47.053 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:47.053 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:47.053 "prchk_reftag": false, 00:20:47.053 "prchk_guard": false, 00:20:47.053 "hdgst": false, 00:20:47.053 "ddgst": false, 00:20:47.053 "allow_unrecognized_csi": false 00:20:47.053 } 00:20:47.053 } 00:20:47.053 Got JSON-RPC error response 00:20:47.053 GoRPCClient: error on JSON-RPC call 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
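
That was a deliberate failure: with the target now enforcing DH-HMAC-CHAP, an attach that offers no key is refused (JSON-RPC Code=-5, Input/output error), and the NOT wrapper from autotest_common.sh (@652-679 above) passes precisely because the wrapped rpc_cmd exits non-zero. Stripped of the wrapper machinery, the assertion amounts to this, reusing the exact RPC from the log and the rpc.py invocation assumed in the earlier sketches:

  # Must fail: the target requires DH-HMAC-CHAP and no key is offered.
  if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "FAIL: unauthenticated attach unexpectedly succeeded" >&2
    exit 1
  fi
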
get_main_ns_ip 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.053 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.053 2024/11/20 15:28:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:47.053 request: 00:20:47.053 { 00:20:47.053 "method": "bdev_nvme_attach_controller", 00:20:47.053 "params": { 00:20:47.053 "name": "nvme0", 00:20:47.053 "trtype": "tcp", 00:20:47.053 "traddr": "10.0.0.1", 00:20:47.053 "adrfam": "ipv4", 00:20:47.053 "trsvcid": "4420", 00:20:47.053 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:47.053 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:47.053 "prchk_reftag": false, 00:20:47.053 "prchk_guard": false, 
00:20:47.053 "hdgst": false, 00:20:47.053 "ddgst": false, 00:20:47.053 "dhchap_key": "key2", 00:20:47.053 "allow_unrecognized_csi": false 00:20:47.053 } 00:20:47.054 } 00:20:47.054 Got JSON-RPC error response 00:20:47.054 GoRPCClient: error on JSON-RPC call 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.054 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.315 2024/11/20 15:28:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:47.315 request: 00:20:47.315 { 00:20:47.315 "method": "bdev_nvme_attach_controller", 00:20:47.315 "params": { 00:20:47.315 "name": "nvme0", 00:20:47.315 "trtype": "tcp", 00:20:47.315 "traddr": "10.0.0.1", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "4420", 00:20:47.315 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:47.315 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:47.315 "prchk_reftag": false, 00:20:47.315 "prchk_guard": false, 00:20:47.315 "hdgst": false, 00:20:47.315 "ddgst": false, 00:20:47.315 "dhchap_key": "key1", 00:20:47.315 "dhchap_ctrlr_key": "ckey2", 00:20:47.315 "allow_unrecognized_csi": false 00:20:47.315 } 00:20:47.315 } 00:20:47.315 Got JSON-RPC error response 00:20:47.315 GoRPCClient: error on JSON-RPC call 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.315 nvme0n1 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.315 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh: 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]] 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
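
Here the re-key happens on a live controller: the target's expected secret is rotated to entry 2 (sha256/ffdhe2048) and bdev_nvme_set_keys updates the still-attached nvme0 to match, with no detach in between; the follow-up call pairing the stale key1 with ckey2 is expected to be refused, and the Code=-13 Permission denied a few lines below confirms it. The two calls via rpc.py, flags verbatim from the log:

  # Rotate host and controller keys on the attached controller.
  scripts/rpc.py bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # A stale host key must be rejected (Code=-13 in this run).
  if scripts/rpc.py bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
    echo "FAIL: re-key with a mismatched key was accepted" >&2
    exit 1
  fi
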
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:47.316 2024/11/20 15:28:33 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied
00:20:47.316 request:
00:20:47.316 {
00:20:47.316 "method": "bdev_nvme_set_keys",
00:20:47.316 "params": {
00:20:47.316 "name": "nvme0",
00:20:47.316 "dhchap_key": "key1",
00:20:47.316 "dhchap_ctrlr_key": "ckey2"
00:20:47.316 }
00:20:47.316 }
00:20:47.316 Got JSON-RPC error response
00:20:47.316 GoRPCClient: error on JSON-RPC call
00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:47.316 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:47.575 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:20:47.575 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:20:47.575 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.575 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:47.575 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.575 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:20:47.575 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:20:48.513 15:28:34
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlkYzU0ZWNlZjZhODQ4NmU4NmE4NWRmMTQxNDUwMDI4YzZkNTkyZjVlZTgxNDIyoAOldQ==: 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: ]] 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc2ODA2OTAwMGQ2NzhlYmY3ODA2MTNjYjExM2E0NWJiNTE3MjRiM2IzZTM0MjlhnlqhIw==: 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.513 nvme0n1 00:20:48.513 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.514 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:48.514 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.514 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:20:48.514 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:48.514 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:20:48.514 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh:
00:20:48.514 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4:
00:20:48.514 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:20:48.514 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:20:48.773 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzU5MzZlMzMxY2MxODg1NzYxNDYzZGNhNTlhYzczZjEJtdJh:
00:20:48.773 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4: ]]
00:20:48.773 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2MwOTAzMTNkYTZiMzk2MmVmOTJlYjVmYjMyZDhlMDbwKWQ4:
00:20:48.773 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:20:48.773 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:20:48.773 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:20:48.773 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:20:48.773 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:48.773 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:20:48.773 15:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:48.773 2024/11/20 15:28:35 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied
00:20:48.773 request:
00:20:48.773 {
00:20:48.773 "method": "bdev_nvme_set_keys",
00:20:48.773 "params": {
00:20:48.773 "name": "nvme0",
00:20:48.773 "dhchap_key": "key2",
00:20:48.773 "dhchap_ctrlr_key": "ckey1"
00:20:48.773 }
00:20:48.773 }
00:20:48.773 Got JSON-RPC error response
00:20:48.773 GoRPCClient: error on JSON-RPC call
00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:48.773 15:28:35
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:48.773 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:49.713 rmmod nvme_tcp 00:20:49.713 rmmod nvme_fabrics 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 93111 ']' 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 93111 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 93111 ']' 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 93111 00:20:49.713 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:20:49.972 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.972 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93111 00:20:49.972 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:49.972 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:49.972 15:28:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93111' 00:20:49.972 killing process with pid 93111 00:20:49.972 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 93111 00:20:49.972 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 93111 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:50.232 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:50.491 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:51.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:51.429 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:51.429 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:51.687 15:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.lcb /tmp/spdk.key-null.0dO /tmp/spdk.key-sha256.Kcd /tmp/spdk.key-sha384.ojg /tmp/spdk.key-sha512.6gL /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:51.687 15:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:52.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:52.261 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:52.261 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:52.261 00:20:52.261 real 0m35.197s 00:20:52.261 user 0m32.280s 00:20:52.261 sys 0m5.425s 00:20:52.261 ************************************ 00:20:52.261 END TEST nvmf_auth_host 00:20:52.261 ************************************ 00:20:52.261 15:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.261 15:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.261 15:28:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:52.261 15:28:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:52.261 15:28:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:52.261 15:28:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.261 15:28:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.261 ************************************ 00:20:52.261 START TEST nvmf_digest 00:20:52.261 
************************************ 00:20:52.261 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:52.261 * Looking for test storage... 00:20:52.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:52.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.526 --rc genhtml_branch_coverage=1 00:20:52.526 --rc genhtml_function_coverage=1 00:20:52.526 --rc genhtml_legend=1 00:20:52.526 --rc geninfo_all_blocks=1 00:20:52.526 --rc geninfo_unexecuted_blocks=1 00:20:52.526 00:20:52.526 ' 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:52.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.526 --rc genhtml_branch_coverage=1 00:20:52.526 --rc genhtml_function_coverage=1 00:20:52.526 --rc genhtml_legend=1 00:20:52.526 --rc geninfo_all_blocks=1 00:20:52.526 --rc geninfo_unexecuted_blocks=1 00:20:52.526 00:20:52.526 ' 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:52.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.526 --rc genhtml_branch_coverage=1 00:20:52.526 --rc genhtml_function_coverage=1 00:20:52.526 --rc genhtml_legend=1 00:20:52.526 --rc geninfo_all_blocks=1 00:20:52.526 --rc geninfo_unexecuted_blocks=1 00:20:52.526 00:20:52.526 ' 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:52.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.526 --rc genhtml_branch_coverage=1 00:20:52.526 --rc genhtml_function_coverage=1 00:20:52.526 --rc genhtml_legend=1 00:20:52.526 --rc geninfo_all_blocks=1 00:20:52.526 --rc geninfo_unexecuted_blocks=1 00:20:52.526 00:20:52.526 ' 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.526 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.527 15:28:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:52.527 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:52.527 Cannot find device "nvmf_init_br" 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:52.527 Cannot find device "nvmf_init_br2" 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:52.527 Cannot find device "nvmf_tgt_br" 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:20:52.527 Cannot find device "nvmf_tgt_br2" 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:52.527 Cannot find device "nvmf_init_br" 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:52.527 15:28:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:52.788 Cannot find device "nvmf_init_br2" 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:52.788 Cannot find device "nvmf_tgt_br" 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:52.788 Cannot find device "nvmf_tgt_br2" 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:52.788 Cannot find device "nvmf_br" 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:52.788 Cannot find device "nvmf_init_if" 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:52.788 Cannot find device "nvmf_init_if2" 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:52.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:52.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:52.788 15:28:39 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:52.788 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:53.048 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:53.048 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:53.048 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:53.048 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:53.048 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:53.048 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:53.048 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:53.048 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:53.048 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:53.048 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:53.048 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:53.048 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:53.049 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:53.049 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.139 ms
00:20:53.049
00:20:53.049 --- 10.0.0.3 ping statistics ---
00:20:53.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:53.049 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:20:53.049 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:20:53.049 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms
00:20:53.049
00:20:53.049 --- 10.0.0.4 ping statistics ---
00:20:53.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:53.049 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:20:53.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:53.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms
00:20:53.049
00:20:53.049 --- 10.0.0.1 ping statistics ---
00:20:53.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:53.049 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:20:53.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:53.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms
00:20:53.049
00:20:53.049 --- 10.0.0.2 ping statistics ---
00:20:53.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:53.049 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:20:53.049 ************************************
00:20:53.049 START TEST nvmf_digest_clean
************************************
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
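The four pings above confirm the veth test topology assembled a few steps earlier: initiator interfaces nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) on the host side, target interfaces nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. A rough condensation of the first pair of links from the trace (the *_if2 pair mirrors it; the iptables ACCEPT rules for port 4420 are left out here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end plus its bridge uplink
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end plus its bridge uplink
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br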
00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=94763 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 94763 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94763 ']' 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.049 15:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:53.310 [2024-11-20 15:28:39.529109] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:20:53.310 [2024-11-20 15:28:39.529177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.310 [2024-11-20 15:28:39.664383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.310 [2024-11-20 15:28:39.738419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.310 [2024-11-20 15:28:39.738472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.310 [2024-11-20 15:28:39.738482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.310 [2024-11-20 15:28:39.738491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.310 [2024-11-20 15:28:39.738515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
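nvmf_tgt has been started inside the target namespace with --wait-for-rpc, so it idles until it is configured over /var/tmp/spdk.sock; the common_target_config step that follows is what produces the "TCP Transport Init", null0, and "Listening on 10.0.0.3 port 4420" messages below. A rough manual equivalent, using standard SPDK RPC names (the null-bdev size and block size here are illustrative, not taken from this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_start_init                      # release the --wait-for-rpc pause
  $rpc nvmf_create_transport -t tcp -o           # matches NVMF_TRANSPORT_OPTS='-t tcp -o'
  $rpc bdev_null_create null0 100 4096           # backing bdev for the test namespace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420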
00:20:53.310 [2024-11-20 15:28:39.738910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:54.250 null0 00:20:54.250 [2024-11-20 15:28:40.658507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.250 [2024-11-20 15:28:40.682584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:54.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
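The bdevperf initiator that comes up next follows the same pattern: launched with --wait-for-rpc, then driven entirely over /var/tmp/bperf.sock. In outline, with the flags copied from the trace below (--ddgst turns on the TCP data digest under test; the hdgst parameter seen in the attach defaults is the header-digest counterpart):

  bperf() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  bperf framework_start_init
  bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests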
00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94819 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94819 /var/tmp/bperf.sock 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94819 ']' 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.250 15:28:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:54.510 [2024-11-20 15:28:40.740165] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:20:54.510 [2024-11-20 15:28:40.740409] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94819 ] 00:20:54.510 [2024-11-20 15:28:40.898099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.510 [2024-11-20 15:28:40.962688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.449 15:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.449 15:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:55.449 15:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:55.449 15:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:55.449 15:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:55.709 15:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:55.710 15:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:55.969 nvme0n1 00:20:55.969 15:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:55.969 15:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:20:55.969 Running I/O for 2 seconds... 00:20:58.290 24591.00 IOPS, 96.06 MiB/s [2024-11-20T15:28:44.772Z] 24925.50 IOPS, 97.37 MiB/s 00:20:58.290 Latency(us) 00:20:58.290 [2024-11-20T15:28:44.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.290 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:58.290 nvme0n1 : 2.00 24949.56 97.46 0.00 0.00 5125.33 2487.21 16949.87 00:20:58.290 [2024-11-20T15:28:44.772Z] =================================================================================================================== 00:20:58.290 [2024-11-20T15:28:44.772Z] Total : 24949.56 97.46 0.00 0.00 5125.33 2487.21 16949.87 00:20:58.290 { 00:20:58.290 "results": [ 00:20:58.290 { 00:20:58.290 "job": "nvme0n1", 00:20:58.290 "core_mask": "0x2", 00:20:58.290 "workload": "randread", 00:20:58.290 "status": "finished", 00:20:58.290 "queue_depth": 128, 00:20:58.290 "io_size": 4096, 00:20:58.290 "runtime": 2.003202, 00:20:58.290 "iops": 24949.55576122628, 00:20:58.290 "mibps": 97.45920219229015, 00:20:58.290 "io_failed": 0, 00:20:58.290 "io_timeout": 0, 00:20:58.290 "avg_latency_us": 5125.3262638902725, 00:20:58.290 "min_latency_us": 2487.209638554217, 00:20:58.291 "max_latency_us": 16949.87309236948 00:20:58.291 } 00:20:58.291 ], 00:20:58.291 "core_count": 1 00:20:58.291 } 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:58.291 | select(.opcode=="crc32c") 00:20:58.291 | "\(.module_name) \(.executed)"' 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94819 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94819 ']' 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94819 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94819 00:20:58.291 killing process with pid 94819 00:20:58.291 Received shutdown signal, test time was about 2.000000 seconds 00:20:58.291 00:20:58.291 Latency(us) 00:20:58.291 [2024-11-20T15:28:44.773Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.291 [2024-11-20T15:28:44.773Z] =================================================================================================================== 00:20:58.291 [2024-11-20T15:28:44.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94819' 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94819 00:20:58.291 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94819
00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94909 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94909 /var/tmp/bperf.sock 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94909 ']' 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.551 15:28:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:20:58.551 [2024-11-20 15:28:45.028321] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:20:58.551 [2024-11-20 15:28:45.028543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94909 ]
00:20:58.551 I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
[2024-11-20 15:28:45.181108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 [2024-11-20 15:28:45.245648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:59.749 15:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.749 15:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:59.749 15:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:59.749 15:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:59.749 15:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:21:00.009 15:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:00.009 15:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:00.269 nvme0n1 00:21:00.269 15:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:00.269 15:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:00.269 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:00.269 Zero copy mechanism will not be used. 00:21:00.269 Running I/O for 2 seconds...
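Each run_bperf pass repeats the handshake traced above: bdevperf starts suspended under --wait-for-rpc, framework_start_init releases it (scan_dsa=false, so the bare `false` at host/digest.sh@86 skips any DSA accel setup), and the controller is attached with --ddgst so NVMe/TCP data digests (crc32c) are enabled on the connection. Stripped of the xtrace noise, this pass boils down to:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  rpc.py -s /var/tmp/bperf.sock framework_start_init
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # yields nvme0n1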
00:21:02.158 9189.00 IOPS, 1148.62 MiB/s [2024-11-20T15:28:48.640Z] 9128.00 IOPS, 1141.00 MiB/s 00:21:02.158 Latency(us) 00:21:02.158 [2024-11-20T15:28:48.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.158 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:02.158 nvme0n1 : 2.00 9127.68 1140.96 0.00 0.00 1750.35 516.52 6843.12 00:21:02.158 [2024-11-20T15:28:48.640Z] =================================================================================================================== 00:21:02.158 [2024-11-20T15:28:48.640Z] Total : 9127.68 1140.96 0.00 0.00 1750.35 516.52 6843.12 00:21:02.158 { 00:21:02.158 "results": [ 00:21:02.158 { 00:21:02.158 "job": "nvme0n1", 00:21:02.158 "core_mask": "0x2", 00:21:02.158 "workload": "randread", 00:21:02.158 "status": "finished", 00:21:02.158 "queue_depth": 16, 00:21:02.158 "io_size": 131072, 00:21:02.158 "runtime": 2.001823, 00:21:02.158 "iops": 9127.68011957101, 00:21:02.158 "mibps": 1140.9600149463763, 00:21:02.158 "io_failed": 0, 00:21:02.158 "io_timeout": 0, 00:21:02.158 "avg_latency_us": 1750.3461411319533, 00:21:02.158 "min_latency_us": 516.5236947791165, 00:21:02.158 "max_latency_us": 6843.116465863453 00:21:02.158 } 00:21:02.158 ], 00:21:02.158 "core_count": 1 00:21:02.158 } 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:02.418 | select(.opcode=="crc32c") 00:21:02.418 | "\(.module_name) \(.executed)"' 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94909 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94909 ']' 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94909 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:02.418 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.679 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94909 00:21:02.679 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:02.679 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
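The pass criterion applied above is worth spelling out: after each run the test reads accel_get_stats over the bperf socket, filters for the crc32c opcode with jq, and asserts both that the executed count is nonzero and that the module matches the expectation (software here, because scan_dsa=false). Condensed:

  rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed
        [[ $acc_module == software ]] && (( acc_executed > 0 )); }   # digest work really ran, in software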
00:21:02.679 killing process with pid 94909 00:21:02.679 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94909' 00:21:02.679 Received shutdown signal, test time was about 2.000000 seconds 00:21:02.679 00:21:02.679 Latency(us) 00:21:02.679 [2024-11-20T15:28:49.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.679 [2024-11-20T15:28:49.161Z] =================================================================================================================== 00:21:02.679 [2024-11-20T15:28:49.161Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.679 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94909 00:21:02.679 15:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94909 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94999 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94999 /var/tmp/bperf.sock 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94999 ']' 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.938 15:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:02.938 [2024-11-20 15:28:49.257818] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:21:02.938 [2024-11-20 15:28:49.257900] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94999 ] 00:21:02.938 [2024-11-20 15:28:49.411894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.198 [2024-11-20 15:28:49.475516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.769 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.769 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:03.769 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:03.769 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:03.769 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:04.336 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:04.336 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:04.336 nvme0n1 00:21:04.595 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:04.595 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:04.595 Running I/O for 2 seconds... 
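perform_tests is what actually starts the timed workload: bdevperf.py sends it over the bperf socket, bdevperf runs the configured job for the -t 2 seconds given at launch, and the JSON block that follows is the raw result the human-readable table is rendered from (runtime, iops, mibps and the latency fields map one-to-one onto the table columns):

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests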
00:21:06.470 29047.00 IOPS, 113.46 MiB/s [2024-11-20T15:28:52.952Z] 29050.00 IOPS, 113.48 MiB/s 00:21:06.470 Latency(us) 00:21:06.470 [2024-11-20T15:28:52.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.470 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:06.470 nvme0n1 : 2.01 29045.36 113.46 0.00 0.00 4402.72 1934.50 11738.58 00:21:06.470 [2024-11-20T15:28:52.952Z] =================================================================================================================== 00:21:06.470 [2024-11-20T15:28:52.952Z] Total : 29045.36 113.46 0.00 0.00 4402.72 1934.50 11738.58 00:21:06.470 { 00:21:06.470 "results": [ 00:21:06.470 { 00:21:06.470 "job": "nvme0n1", 00:21:06.470 "core_mask": "0x2", 00:21:06.470 "workload": "randwrite", 00:21:06.470 "status": "finished", 00:21:06.470 "queue_depth": 128, 00:21:06.470 "io_size": 4096, 00:21:06.470 "runtime": 2.005897, 00:21:06.470 "iops": 29045.3597567572, 00:21:06.470 "mibps": 113.45843654983281, 00:21:06.470 "io_failed": 0, 00:21:06.470 "io_timeout": 0, 00:21:06.470 "avg_latency_us": 4402.721924710961, 00:21:06.470 "min_latency_us": 1934.4963855421686, 00:21:06.470 "max_latency_us": 11738.576706827309 00:21:06.470 } 00:21:06.470 ], 00:21:06.470 "core_count": 1 00:21:06.470 } 00:21:06.470 15:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:06.470 15:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:06.470 15:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:06.729 15:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:06.729 | select(.opcode=="crc32c") 00:21:06.729 | "\(.module_name) \(.executed)"' 00:21:06.729 15:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:06.729 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:06.729 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:06.729 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:06.729 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:06.729 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94999 00:21:06.729 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94999 ']' 00:21:06.729 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94999 00:21:06.729 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:06.729 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.729 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94999 00:21:06.988 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:06.988 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
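The iops and mibps fields in these result blocks are mutually consistent: MiB/s = IOPS x io_size / 2^20. Checking the two 4096-byte passes reported above:

  awk 'BEGIN { print 29045.3597567572 * 4096 / 1048576 }'    # 113.458... -> reported 113.46 MiB/s
  awk 'BEGIN { print 24949.55576122628 * 4096 / 1048576 }'   # 97.459...  -> reported 97.46 MiB/s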
00:21:06.988 killing process with pid 94999 00:21:06.988 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94999' 00:21:06.988 Received shutdown signal, test time was about 2.000000 seconds 00:21:06.988 00:21:06.988 Latency(us) 00:21:06.988 [2024-11-20T15:28:53.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.988 [2024-11-20T15:28:53.470Z] =================================================================================================================== 00:21:06.988 [2024-11-20T15:28:53.470Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:06.988 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94999 00:21:06.988 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94999 00:21:07.247 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:07.247 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:07.247 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:07.247 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:07.248 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:07.248 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:07.248 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:07.248 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:07.248 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95090 00:21:07.248 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95090 /var/tmp/bperf.sock 00:21:07.248 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95090 ']' 00:21:07.248 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:07.248 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:07.248 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:07.248 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.248 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:07.248 [2024-11-20 15:28:53.532323] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:21:07.248 [2024-11-20 15:28:53.532405] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95090 ] 00:21:07.248 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:07.248 Zero copy mechanism will not be used. 00:21:07.248 [2024-11-20 15:28:53.686790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.507 [2024-11-20 15:28:53.762052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.075 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.075 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:08.075 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:08.075 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:08.075 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:08.334 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:08.334 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:08.901 nvme0n1 00:21:08.901 15:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:08.901 15:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:08.901 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:08.901 Zero copy mechanism will not be used. 00:21:08.901 Running I/O for 2 seconds... 
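This is the fourth and last digest-clean pass; host/digest.sh@128-131 sweep both workloads at two block-size/queue-depth points. The script spells the four calls out individually, but the sweep is equivalent to this loop (a sketch, not the script's literal form):

  for cfg in 'randread 4096 128' 'randread 131072 16' \
             'randwrite 4096 128' 'randwrite 131072 16'; do
      set -- $cfg                       # word-split into the three positional args
      run_bperf "$1" "$2" "$3" false    # rw, block size, queue depth, scan_dsa
  done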
00:21:10.775 7675.00 IOPS, 959.38 MiB/s [2024-11-20T15:28:57.257Z] 8072.50 IOPS, 1009.06 MiB/s 00:21:10.775 Latency(us) 00:21:10.775 [2024-11-20T15:28:57.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.775 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:10.775 nvme0n1 : 2.00 8070.38 1008.80 0.00 0.00 1979.09 1269.92 9053.97 00:21:10.775 [2024-11-20T15:28:57.257Z] =================================================================================================================== 00:21:10.775 [2024-11-20T15:28:57.257Z] Total : 8070.38 1008.80 0.00 0.00 1979.09 1269.92 9053.97 00:21:10.775 { 00:21:10.775 "results": [ 00:21:10.775 { 00:21:10.775 "job": "nvme0n1", 00:21:10.775 "core_mask": "0x2", 00:21:10.775 "workload": "randwrite", 00:21:10.775 "status": "finished", 00:21:10.775 "queue_depth": 16, 00:21:10.775 "io_size": 131072, 00:21:10.775 "runtime": 2.003003, 00:21:10.775 "iops": 8070.382320945101, 00:21:10.775 "mibps": 1008.7977901181376, 00:21:10.775 "io_failed": 0, 00:21:10.775 "io_timeout": 0, 00:21:10.775 "avg_latency_us": 1979.0910635676016, 00:21:10.775 "min_latency_us": 1269.9244979919679, 00:21:10.775 "max_latency_us": 9053.969477911647 00:21:10.775 } 00:21:10.775 ], 00:21:10.775 "core_count": 1 00:21:10.775 } 00:21:10.775 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:10.775 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:10.775 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:10.775 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:10.775 | select(.opcode=="crc32c") 00:21:10.775 | "\(.module_name) \(.executed)"' 00:21:10.775 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95090 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95090 ']' 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95090 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95090 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
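Collecting the four digest-clean results reported above (all with data digest enabled and crc32c executed by the software accel module):

  workload    io_size  qdepth      IOPS    MiB/s  avg latency (us)
  randread       4096     128  24949.56    97.46           5125.33
  randread     131072      16   9127.68  1140.96           1750.35
  randwrite      4096     128  29045.36   113.46           4402.72
  randwrite    131072      16   8070.38  1008.80           1979.09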
00:21:11.034 killing process with pid 95090 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95090' 00:21:11.034 Received shutdown signal, test time was about 2.000000 seconds 00:21:11.034 00:21:11.034 Latency(us) 00:21:11.034 [2024-11-20T15:28:57.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.034 [2024-11-20T15:28:57.516Z] =================================================================================================================== 00:21:11.034 [2024-11-20T15:28:57.516Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95090 00:21:11.034 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95090 00:21:11.293 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94763 00:21:11.293 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94763 ']' 00:21:11.293 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94763 00:21:11.293 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:11.293 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.293 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94763 00:21:11.551 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.551 killing process with pid 94763 00:21:11.552 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.552 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94763' 00:21:11.552 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94763 00:21:11.552 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94763 00:21:11.810 00:21:11.810 real 0m18.604s 00:21:11.810 user 0m34.536s 00:21:11.810 sys 0m5.328s 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:11.810 ************************************ 00:21:11.810 END TEST nvmf_digest_clean 00:21:11.810 ************************************ 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:11.810 ************************************ 00:21:11.810 START TEST nvmf_digest_error 00:21:11.810 ************************************ 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:21:11.810 15:28:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=95203 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 95203 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95203 ']' 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.810 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.811 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.811 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.811 15:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:11.811 [2024-11-20 15:28:58.213868] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:21:11.811 [2024-11-20 15:28:58.213947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.069 [2024-11-20 15:28:58.364205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.069 [2024-11-20 15:28:58.444110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.069 [2024-11-20 15:28:58.444187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.069 [2024-11-20 15:28:58.444198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.069 [2024-11-20 15:28:58.444206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.069 [2024-11-20 15:28:58.444214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
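The digest-error target is deliberately started with --wait-for-rpc: crc32c has to be reassigned to the error-injection accel module (host/digest.sh@104, just below) before the accel layer finishes initializing, since opcode-to-module assignment is a startup-time setting. Reduced to its steps (the framework_start_init call is implied by the later notices rather than traced here, so treat it as an assumption):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc &
  rpc.py accel_assign_opc -o crc32c -m error   # route every crc32c through the error module
  rpc.py framework_start_init                  # only now does the accel layer come up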
00:21:12.069 [2024-11-20 15:28:58.444605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.636 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.636 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:12.636 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:12.636 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.636 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:12.896 [2024-11-20 15:28:59.164022] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:12.896 null0 00:21:12.896 [2024-11-20 15:28:59.320194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.896 [2024-11-20 15:28:59.344309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95253 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95253 /var/tmp/bperf.sock 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95253 ']' 00:21:12.896 15:28:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.896 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:13.156 [2024-11-20 15:28:59.407709] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:21:13.156 [2024-11-20 15:28:59.407791] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95253 ] 00:21:13.156 [2024-11-20 15:28:59.541375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.156 [2024-11-20 15:28:59.612726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.089 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.089 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:14.089 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:14.089 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:14.089 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:14.089 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.089 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:14.089 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.089 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:14.089 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:14.348 nvme0n1 00:21:14.608 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:14.608 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.608 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:14.608 15:29:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.608 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:14.608 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:14.608 Running I/O for 2 seconds... 00:21:14.608 [2024-11-20 15:29:00.973322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.608 [2024-11-20 15:29:00.973367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.608 [2024-11-20 15:29:00.973396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.608 [2024-11-20 15:29:00.983546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.608 [2024-11-20 15:29:00.983580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.608 [2024-11-20 15:29:00.983592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.608 [2024-11-20 15:29:00.992539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.608 [2024-11-20 15:29:00.992571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.608 [2024-11-20 15:29:00.992597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.608 [2024-11-20 15:29:01.002867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.608 [2024-11-20 15:29:01.002900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.608 [2024-11-20 15:29:01.002911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.608 [2024-11-20 15:29:01.012899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.608 [2024-11-20 15:29:01.012928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.608 [2024-11-20 15:29:01.012938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.608 [2024-11-20 15:29:01.023316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.608 [2024-11-20 15:29:01.023348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.608 [2024-11-20 15:29:01.023359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.608 
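The notice stream above and below is the injection working as intended: injection is kept disabled while the controller attaches (host/digest.sh@63, -t disable) so the connect itself completes cleanly, then host/digest.sh@67 arms 256 corrupt crc32c results on the target, so the data digests it sends with C2H data come out wrong. Each bad digest received by the initiator inside bdevperf is logged as a data digest error and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); bdev_nvme retries these indefinitely because of --bdev-retry-count -1, while --nvme-error-stat has the driver keep per-status error counts. Minus the xtrace noise:

  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py accel_error_inject_error -o crc32c -t disable          # target side: keep the attach clean
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt the next 256 digests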
[2024-11-20 15:29:01.033560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.608 [2024-11-20 15:29:01.033592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.608 [2024-11-20 15:29:01.033603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.608 [2024-11-20 15:29:01.043766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.608 [2024-11-20 15:29:01.043796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.608 [2024-11-20 15:29:01.043807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.608 [2024-11-20 15:29:01.055901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.608 [2024-11-20 15:29:01.055930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.608 [2024-11-20 15:29:01.055941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.608 [2024-11-20 15:29:01.066227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.609 [2024-11-20 15:29:01.066258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.609 [2024-11-20 15:29:01.066269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.609 [2024-11-20 15:29:01.075856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.609 [2024-11-20 15:29:01.075887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.609 [2024-11-20 15:29:01.075898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.609 [2024-11-20 15:29:01.086622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.609 [2024-11-20 15:29:01.086652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.609 [2024-11-20 15:29:01.086663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.868 [2024-11-20 15:29:01.096490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.868 [2024-11-20 15:29:01.096519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.868 [2024-11-20 15:29:01.096530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.868 [2024-11-20 15:29:01.106997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.868 [2024-11-20 15:29:01.107030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.868 [2024-11-20 15:29:01.107040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.868 [2024-11-20 15:29:01.116960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.868 [2024-11-20 15:29:01.116991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.868 [2024-11-20 15:29:01.117001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.868 [2024-11-20 15:29:01.128257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.868 [2024-11-20 15:29:01.128288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.868 [2024-11-20 15:29:01.128299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.868 [2024-11-20 15:29:01.139129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.139159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.139170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.149722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.149752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.149763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.160253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.160282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.160293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.171234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.171264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.171275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.181913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.181953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.181963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.192616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.192647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.192658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.202002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.202032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.202043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.212262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.212291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.212301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.222590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.222620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.222630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.233314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.233344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.233354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.244345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.244375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.244385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.254822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.254853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.254864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.263941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.263971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.263982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.275117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.275149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.275159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.284784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.284812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.284824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.295117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.295145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.295156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.305975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.306005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.306016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.316648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.316687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 
[2024-11-20 15:29:01.316699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.327326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.327356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.327367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.869 [2024-11-20 15:29:01.337591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:14.869 [2024-11-20 15:29:01.337621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.869 [2024-11-20 15:29:01.337631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.347848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.347887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.347897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.358218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.358246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.358256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.368563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.368593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.368603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.379208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.379238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.379249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.389052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.389081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25030 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.389091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.399714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.399744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.399755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.410198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.410227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.410237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.420439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.420470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.420480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.431144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.431175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.431185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.441438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.441469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.441480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.452297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.452328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.452338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.462543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.462574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:24949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.462585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.472845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.472875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.472885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.483144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.483175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.483186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.493437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.493468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.493477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.503673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.503713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.503725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.513737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.513764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.513774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.524412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.524441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.524451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.535383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.535415] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.535426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.544738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.544769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.544779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.556828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.556858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.556869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.566055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.566086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.129 [2024-11-20 15:29:01.566096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.129 [2024-11-20 15:29:01.575855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.129 [2024-11-20 15:29:01.575884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.130 [2024-11-20 15:29:01.575894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.130 [2024-11-20 15:29:01.586419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.130 [2024-11-20 15:29:01.586448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.130 [2024-11-20 15:29:01.586458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.130 [2024-11-20 15:29:01.597018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.130 [2024-11-20 15:29:01.597048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.130 [2024-11-20 15:29:01.597058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.130 [2024-11-20 15:29:01.607615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 
00:21:15.130 [2024-11-20 15:29:01.607647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.130 [2024-11-20 15:29:01.607658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.618249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.618281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.618291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.627287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.627320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.627331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.638014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.638042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.638053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.647537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.647567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.647578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.657311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.657341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.657351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.667985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.668015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.668026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.678668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.678704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.678715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.688862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.688893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.688904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.697397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.697428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.697439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.708694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.708724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.708735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.719049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.719079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.719090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.729360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.729391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.729402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.739939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.739970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.739980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.750579] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.750611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.750621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.761508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.761537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.761548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.771967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.771998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.772008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.782192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.782221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.782232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.790902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.790933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.790944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.801308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.801339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.801351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.812067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.812097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.812107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:15.390 [2024-11-20 15:29:01.823057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.823086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.823097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.833853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.833883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.833893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.844248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.844278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.844289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.853412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.390 [2024-11-20 15:29:01.853443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.390 [2024-11-20 15:29:01.853453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.390 [2024-11-20 15:29:01.863935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.391 [2024-11-20 15:29:01.863966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.391 [2024-11-20 15:29:01.863977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:01.874655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.874694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:01.874705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:01.884894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.884935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:01.884945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:01.894880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.894908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:01.894919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:01.905629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.905660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:01.905670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:01.916660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.916700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:01.916728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:01.926858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.926889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:01.926900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:01.936868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.936898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:01.936908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:01.947055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.947086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:01.947096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 24472.00 IOPS, 95.59 MiB/s [2024-11-20T15:29:02.133Z] [2024-11-20 15:29:01.958400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.958423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 
15:29:01.958433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:01.969156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.969185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:01.969194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:01.979311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.979341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:01.979352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:01.989576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.989605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:01.989615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:01.999936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:01.999967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:01.999976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:02.010671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:02.010708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:02.010719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:02.020242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:02.020272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:02.020283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:02.031613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:02.031644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22069 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:02.031654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:02.042034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:02.042066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:02.042076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:02.052424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:02.052455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:02.052466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:02.062750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:02.062780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:02.062790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:02.072850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:02.072878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:02.072888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:02.082989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:02.083018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:02.083028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:02.093454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:02.093483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.651 [2024-11-20 15:29:02.093494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.651 [2024-11-20 15:29:02.103549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.651 [2024-11-20 15:29:02.103579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:104 nsid:1 lba:7479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.652 [2024-11-20 15:29:02.103589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.652 [2024-11-20 15:29:02.112576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.652 [2024-11-20 15:29:02.112606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.652 [2024-11-20 15:29:02.112617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.652 [2024-11-20 15:29:02.122713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.652 [2024-11-20 15:29:02.122742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.652 [2024-11-20 15:29:02.122753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.133529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.133560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.133569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.143568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.143598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.143608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.153724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.153754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.153764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.164624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.164655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.164665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.172901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.172930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.172940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.183324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.183355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.183366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.193618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.193649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.193659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.203719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.203749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.203760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.214491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.214522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.214533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.225251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.225282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.225292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.235024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.235054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.235065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.245326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 
[2024-11-20 15:29:02.245355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.245366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.256525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.256556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.256567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.266767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.266798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.266808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.276985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.277022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.277032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.287272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.287302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.287313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.297620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.297651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.297662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.308329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.308361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.308373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.317525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.317556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.317567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.330058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.330088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.330100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.340400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.340429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.340439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.349437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.349468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.349479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.360434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.360466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.360477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.370581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.370612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.370623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.380753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.380782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.913 [2024-11-20 15:29:02.380792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.913 [2024-11-20 15:29:02.390972] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:15.913 [2024-11-20 15:29:02.391000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.914 [2024-11-20 15:29:02.391010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.402060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.402090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.402100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.412447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.412477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.412487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.423109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.423138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.423148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.433649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.433690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.433717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.444128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.444160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.444170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.454646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.454685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.454713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:16.174 [2024-11-20 15:29:02.464694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.464739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.464750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.475028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.475057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.475067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.485221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.485249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.485259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.494275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.494313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.494340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.503957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.503987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.503997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.514353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.514384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.514395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.525048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.525077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.525088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.535884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.535915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.535925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.546743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.546772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.546784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.557123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.557151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.557161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.568101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.568132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.568142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.578311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.578355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.578382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.587925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.587955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.587966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.598382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.598412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.598423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.609243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.174 [2024-11-20 15:29:02.609273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.174 [2024-11-20 15:29:02.609283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.174 [2024-11-20 15:29:02.619845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.175 [2024-11-20 15:29:02.619874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.175 [2024-11-20 15:29:02.619884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.175 [2024-11-20 15:29:02.631229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.175 [2024-11-20 15:29:02.631258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.175 [2024-11-20 15:29:02.631268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.175 [2024-11-20 15:29:02.641517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.175 [2024-11-20 15:29:02.641549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.175 [2024-11-20 15:29:02.641559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.175 [2024-11-20 15:29:02.652496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.175 [2024-11-20 15:29:02.652527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.175 [2024-11-20 15:29:02.652538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.663267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.663297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.663308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.673720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.673764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.673775] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.684030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.684059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.684069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.694285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.694320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.694346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.704855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.704883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.704893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.715766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.715795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.715805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.724567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.724598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.724608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.735204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.735234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.735245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.747200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.747229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25322 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.747239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.757531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.757560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.757570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.765298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.765328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.765338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.777115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.777145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.777155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.787649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.787688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.787716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.797862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.797889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.797899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.808204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.808233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.808244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.818868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.818898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:2411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.818909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.828038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.828068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.828079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.840116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.840146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.840157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.851053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.851083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.851094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.862025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.862063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.435 [2024-11-20 15:29:02.862073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.435 [2024-11-20 15:29:02.870752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.435 [2024-11-20 15:29:02.870782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.436 [2024-11-20 15:29:02.870792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.436 [2024-11-20 15:29:02.881470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.436 [2024-11-20 15:29:02.881502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.436 [2024-11-20 15:29:02.881513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.436 [2024-11-20 15:29:02.892399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.436 [2024-11-20 15:29:02.892430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.436 [2024-11-20 15:29:02.892440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.436 [2024-11-20 15:29:02.902867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.436 [2024-11-20 15:29:02.902897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.436 [2024-11-20 15:29:02.902908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.436 [2024-11-20 15:29:02.913891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.436 [2024-11-20 15:29:02.913920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.436 [2024-11-20 15:29:02.913930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.695 [2024-11-20 15:29:02.923884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.695 [2024-11-20 15:29:02.923913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.695 [2024-11-20 15:29:02.923924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.695 [2024-11-20 15:29:02.935264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.695 [2024-11-20 15:29:02.935293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.695 [2024-11-20 15:29:02.935304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.695 [2024-11-20 15:29:02.946423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.695 [2024-11-20 15:29:02.946451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.695 [2024-11-20 15:29:02.946462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.695 24477.50 IOPS, 95.62 MiB/s [2024-11-20T15:29:03.177Z] [2024-11-20 15:29:02.957838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe73130) 00:21:16.695 [2024-11-20 15:29:02.957866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.695 [2024-11-20 15:29:02.957877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.695 00:21:16.695 Latency(us) 00:21:16.695 [2024-11-20T15:29:03.177Z] Device Information : runtime(s) 
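These *ERROR* lines are the test working as intended, not a malfunction: digest.sh arms SPDK's accel crc32c error injection (traced further down as accel_error_inject_error -o crc32c -t corrupt), so the host-side data digest check in nvme_tcp.c fails and each affected READ completes as a transient transport error, which the bdev layer retries because --bdev-retry-count is -1. A quick, hedged way to tally them from a captured console log like this one (the log filename is an assumption, not something this run produced):

    # rough count of injected data-digest failures in a saved console log
    grep -c 'data digest error on tqpair' nvmf-digest-error.log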
00:21:16.695
00:21:16.695 Latency(us)
00:21:16.695 [2024-11-20T15:29:03.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:16.695 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:21:16.695 nvme0n1 : 2.00 24490.27 95.67 0.00 0.00 5221.82 3053.08 13686.23
00:21:16.695 [2024-11-20T15:29:03.177Z] ===================================================================================================================
00:21:16.695 [2024-11-20T15:29:03.177Z] Total : 24490.27 95.67 0.00 0.00 5221.82 3053.08 13686.23
00:21:16.696 {
00:21:16.696   "results": [
00:21:16.696     {
00:21:16.696       "job": "nvme0n1",
00:21:16.696       "core_mask": "0x2",
00:21:16.696       "workload": "randread",
00:21:16.696       "status": "finished",
00:21:16.696       "queue_depth": 128,
00:21:16.696       "io_size": 4096,
00:21:16.696       "runtime": 2.004184,
00:21:16.696       "iops": 24490.266362769085,
00:21:16.696       "mibps": 95.66510297956674,
00:21:16.696       "io_failed": 0,
00:21:16.696       "io_timeout": 0,
00:21:16.696       "avg_latency_us": 5221.819055616554,
00:21:16.696       "min_latency_us": 3053.0827309236947,
00:21:16.696       "max_latency_us": 13686.232931726907
00:21:16.696     }
00:21:16.696   ],
00:21:16.696   "core_count": 1
00:21:16.696 }
00:21:16.696 15:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:16.696 15:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:16.696 15:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:16.696 15:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:16.696 | .driver_specific
00:21:16.696 | .nvme_error
00:21:16.696 | .status_code
00:21:16.696 | .command_transient_transport_error'
00:21:16.955 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 192 > 0 ))
00:21:16.955 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95253
00:21:16.955 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95253 ']'
00:21:16.955 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95253
00:21:16.955 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:21:16.955 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:16.955 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95253
00:21:16.955 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:16.955 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:16.955 killing process with pid 95253
00:21:16.955 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95253'
00:21:16.955 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95253
00:21:16.955 Received shutdown signal, test time was about 2.000000 seconds
00:21:16.955
00:21:16.955 Latency(us)
00:21:16.955 [2024-11-20T15:29:03.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:16.955 [2024-11-20T15:29:03.437Z] ===================================================================================================================
00:21:16.955 [2024-11-20T15:29:03.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:16.955 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95253
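The teardown above closes out the first randread pass: get_transient_errcount reads the bdev's NVMe error statistics over bperf's RPC socket, pulls the transient-transport-error counter out with jq (192 here, so the (( ... > 0 )) assertion holds), and the bdevperf process is then killed. Condensed into a standalone sketch, using the socket path, bdev name, and rpc.py path exactly as traced above (note the nvme_error block only appears when --nvme-error-stat was set, as this suite does before attaching the controller):

    # pull the per-bdev count of COMMAND TRANSIENT TRANSPORT ERROR completions
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( count > 0 ))   # fail the test if no injected errors were observed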
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95342
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95342 /var/tmp/bperf.sock
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95342 ']'
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:17.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:17.215 15:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
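The launch pattern here is the standard SPDK one for driving bdevperf over a private RPC socket: start it with -z (wait for RPC before running I/O) and a dedicated -r socket, then poll until the socket answers before configuring bdevs. A minimal sketch with this run's parameters — the polling loop is a simplification of autotest_common.sh's waitforlisten, and rpc_get_methods is just a cheap existing RPC to probe with:

    # 128 KiB random reads, queue depth 16, 2 s run, core mask 0x2; -z holds I/O until told to start
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # poll the UNIX-domain socket until the app services RPCs (waitforlisten allows max_retries=100)
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done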
00:21:17.215 [2024-11-20 15:29:03.583992] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:21:17.215 [2024-11-20 15:29:03.584068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95342 ]
00:21:17.215 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:17.215 Zero copy mechanism will not be used.
00:21:17.474 [2024-11-20 15:29:03.739744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:17.474 [2024-11-20 15:29:03.802595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:18.042 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:18.042 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:21:18.042 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:18.042 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:18.300 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:18.300 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.300 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:18.300 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.300 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:18.300 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:18.559 nvme0n1
00:21:18.559 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:21:18.559 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.559 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:18.559 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.559 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:18.559 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:18.820 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:18.820 Zero copy mechanism will not be used.
00:21:18.820 Running I/O for 2 seconds...
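Taken together, the trace above is the whole setup for the second error scenario: NVMe error statistics and unlimited bdev retries are enabled, any stale crc32c injection is cleared, the controller is attached with the TCP data digest turned on (--ddgst) so every data PDU carries a crc32c, 32 corrupt crc32c operations are armed, and perform_tests starts the timed run. As a condensed sketch using the exact RPCs from the trace — bperf_rpc and rpc_cmd are the suite's wrappers around rpc.py, and the socket behind rpc_cmd is not expanded in this excerpt, so it is left as the wrapper:

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors, retry forever
    rpc_cmd accel_error_inject_error -o crc32c -t disable                     # start from a clean state
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # data digest enabled
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32               # corrupt the next 32 crc32c ops
    # kick off the randread workload configured when bdevperf was launched
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests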
[2024-11-20 15:29:05.059715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10)
[2024-11-20 15:29:05.059765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-20 15:29:05.059779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern repeats for several dozen more qid:1 len:32 READs on tqpair=(0x6b9d10), from 15:29:05.063485 through 15:29:05.249588, with varying cid, lba, and sqhd — every one completing as COMMAND TRANSIENT TRANSPORT ERROR (00/22) p:0 m:0 dnr:0 ...]
[2024-11-20 15:29:05.253092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest
error on tqpair=(0x6b9d10) 00:21:18.822 [2024-11-20 15:29:05.253125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.822 [2024-11-20 15:29:05.253135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:18.822 [2024-11-20 15:29:05.256787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.822 [2024-11-20 15:29:05.256820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.822 [2024-11-20 15:29:05.256831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:18.822 [2024-11-20 15:29:05.260559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.822 [2024-11-20 15:29:05.260592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.823 [2024-11-20 15:29:05.260602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:18.823 [2024-11-20 15:29:05.263201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.823 [2024-11-20 15:29:05.263231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.823 [2024-11-20 15:29:05.263241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:18.823 [2024-11-20 15:29:05.266458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.823 [2024-11-20 15:29:05.266492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.823 [2024-11-20 15:29:05.266502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:18.823 [2024-11-20 15:29:05.270034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.823 [2024-11-20 15:29:05.270066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.823 [2024-11-20 15:29:05.270093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:18.823 [2024-11-20 15:29:05.272652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.823 [2024-11-20 15:29:05.272692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.823 [2024-11-20 15:29:05.272719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:18.823 [2024-11-20 15:29:05.275855] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.823 [2024-11-20 15:29:05.275886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.823 [2024-11-20 15:29:05.275897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:18.823 [2024-11-20 15:29:05.279485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.823 [2024-11-20 15:29:05.279535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.823 [2024-11-20 15:29:05.279546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:18.823 [2024-11-20 15:29:05.283243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.823 [2024-11-20 15:29:05.283278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.823 [2024-11-20 15:29:05.283289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:18.823 [2024-11-20 15:29:05.286623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.823 [2024-11-20 15:29:05.286658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.823 [2024-11-20 15:29:05.286669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:18.823 [2024-11-20 15:29:05.289870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.823 [2024-11-20 15:29:05.289899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.823 [2024-11-20 15:29:05.289909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:18.823 [2024-11-20 15:29:05.291984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.823 [2024-11-20 15:29:05.292014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.823 [2024-11-20 15:29:05.292024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:18.823 [2024-11-20 15:29:05.295658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:18.823 [2024-11-20 15:29:05.295702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.823 [2024-11-20 15:29:05.295728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:21:19.086 [2024-11-20 15:29:05.299260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.299295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.299306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.302321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.302367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.302377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.305508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.305538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.305548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.309147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.309177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.309187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.312645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.312691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.312702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.316181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.316215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.316225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.319829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.319863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.319874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.323153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.323187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.323198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.326639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.326684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.326696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.330101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.330131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.330141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.333520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.333551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.333561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.337081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.337111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.337120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.340644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.340687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.340714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.344227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.344261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.344272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.347823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.347866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.347876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.351438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.351471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.351482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.354896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.354930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.354956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.358370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.358401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.358412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.361881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.361912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.361922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.365406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.365436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.365446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.368956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.369002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.369012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.372303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.086 [2024-11-20 15:29:05.372336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-11-20 15:29:05.372345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.086 [2024-11-20 15:29:05.375779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.375811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.375821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.379360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.379395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.379406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.382797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.382831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.382842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.385976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.386007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.386018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.389218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.389249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.389260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.391478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.391511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 
[2024-11-20 15:29:05.391522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.395228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.395262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.395272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.399007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.399042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.399053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.402486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.402518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.402529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.404709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.404737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.404748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.408460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.408494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.408505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.412230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.412264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.412275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.415907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.415941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15040 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.415952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.418341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.418370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.418380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.421510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.421542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.421552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.425179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.425212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.425223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.428768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.428800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.428811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.431094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.431127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.431138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.434703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.434735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.434746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.437281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.437311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.437322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.440374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.440407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.440417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.444127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.444159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.444169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.447894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.447926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.447936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.451593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.451803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.451818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.455126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.455165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.455176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.458743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.458780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.458791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.462314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.462362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-11-20 15:29:05.462373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.087 [2024-11-20 15:29:05.465811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.087 [2024-11-20 15:29:05.465855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.465865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.469282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.469318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.469328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.472845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.472893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.472904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.476365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.476400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.476410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.479916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.479951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.479961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.482403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.482436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.482448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.485341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 
[2024-11-20 15:29:05.485375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.485386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.488706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.488739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.488750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.491234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.491271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.491282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.494940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.495095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.495262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.498443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.498579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.498594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.502155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.502191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.502202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.505611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.505647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.505658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.508881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.508916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.508927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.512375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.512512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.512526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.515900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.515938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.515949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.519438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.519477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.519488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.522845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.522881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.522892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.526217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.526250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.526260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.529770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.529802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.529812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.533297] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.533331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.533341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.536649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.536795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.536809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.540398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.540436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.540448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.543947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.544099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.544185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.547778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.547929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.548027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.551497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.551649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.551783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.088 [2024-11-20 15:29:05.555299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.555339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-11-20 15:29:05.555350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:21:19.088 [2024-11-20 15:29:05.558799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.088 [2024-11-20 15:29:05.558835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-11-20 15:29:05.558846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.089 [2024-11-20 15:29:05.562319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.089 [2024-11-20 15:29:05.562368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-11-20 15:29:05.562379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.565906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.565941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.565952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.569339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.569373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.569383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.572901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.572937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.572947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.576442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.576596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.576611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.580173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.580211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.580222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.583690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.583754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.583765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.587239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.587276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.587287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.590745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.590780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.590790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.594213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.594247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.594257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.597800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.597833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.597843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.601386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.601420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.601431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.604950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.604986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.604996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.608520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.608658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.608701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.612111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.612149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.612160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.615455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.615493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.615504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.619083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.619122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.619133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.622717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.622752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.622763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.626245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.626279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.626289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.629881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.629915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.372 [2024-11-20 15:29:05.629925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.372 [2024-11-20 15:29:05.633264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.372 [2024-11-20 15:29:05.633298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.633308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.636820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.636857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.636868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.640134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.640273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.640287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.643868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.643904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.643914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.647203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.647240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.647252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.650846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.650883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.650894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.654292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.654349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 
[2024-11-20 15:29:05.654360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.657660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.657704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.657715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.661281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.661316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.661326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.664808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.664853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.664863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.668345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.668483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.668496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.671950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.671988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.671999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.675402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.675440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.675461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.678955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.678993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.679004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.682530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.682567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.682577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.685934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.685967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.685977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.689510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.689545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.689556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.693051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.693086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.693096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.696513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.696651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.696665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.700197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.700236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.700246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.703788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.703823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.703833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.707305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.707343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.707354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.710664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.710709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.710720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.714121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.714155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.714165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.717361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.717396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.717407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.720768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.720801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.720812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.724064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.724201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.724215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.373 [2024-11-20 15:29:05.727495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.373 [2024-11-20 15:29:05.727532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.373 [2024-11-20 15:29:05.727543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.730760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.730896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.730910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.733699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.733840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.733937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.737056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.737221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.737308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.740690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.740836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.740917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.743055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.743093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.743104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.746421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.746457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.746468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.749824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 
[2024-11-20 15:29:05.749859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.749869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.752030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.752063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.752073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.755010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.755047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.755058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.758239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.758273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.758284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.760442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.760478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.760489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.763715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.763749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.763759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.766138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.766170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.766181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.769135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.769170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.769180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.772637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.772683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.772694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.776011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.776048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.776058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.779502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.779538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.779549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.782844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.782880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.782891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.786411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.786447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.786457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.789998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.790030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.790041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.793287] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.793418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.793432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.796957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.796995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.797006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.800462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.800498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.800509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.803906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.803942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.803952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.807458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.807495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.807506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.811017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.811055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.811066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.374 [2024-11-20 15:29:05.814647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.374 [2024-11-20 15:29:05.814691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.374 [2024-11-20 15:29:05.814702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:21:19.375 [2024-11-20 15:29:05.818174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.375 [2024-11-20 15:29:05.818208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.375 [2024-11-20 15:29:05.818218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.375 [2024-11-20 15:29:05.821718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.375 [2024-11-20 15:29:05.821750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.375 [2024-11-20 15:29:05.821761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.375 [2024-11-20 15:29:05.825217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.375 [2024-11-20 15:29:05.825251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.375 [2024-11-20 15:29:05.825262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.375 [2024-11-20 15:29:05.828746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.375 [2024-11-20 15:29:05.828778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.375 [2024-11-20 15:29:05.828789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.375 [2024-11-20 15:29:05.832186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.375 [2024-11-20 15:29:05.832221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.375 [2024-11-20 15:29:05.832232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.375 [2024-11-20 15:29:05.835832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.375 [2024-11-20 15:29:05.835867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.375 [2024-11-20 15:29:05.835877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.375 [2024-11-20 15:29:05.839249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.375 [2024-11-20 15:29:05.839287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.375 [2024-11-20 15:29:05.839299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.375 [2024-11-20 15:29:05.842595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.375 [2024-11-20 15:29:05.842631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.375 [2024-11-20 15:29:05.842642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.375 [2024-11-20 15:29:05.846120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.375 [2024-11-20 15:29:05.846153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.375 [2024-11-20 15:29:05.846163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.375 [2024-11-20 15:29:05.849661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.375 [2024-11-20 15:29:05.849706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.375 [2024-11-20 15:29:05.849717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.635 [2024-11-20 15:29:05.853180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.635 [2024-11-20 15:29:05.853214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.635 [2024-11-20 15:29:05.853225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.635 [2024-11-20 15:29:05.856800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.635 [2024-11-20 15:29:05.856971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.635 [2024-11-20 15:29:05.857059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.635 [2024-11-20 15:29:05.860573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.860734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.860820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.864041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.864079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.864091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.867653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.867698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.867709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.871146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.871184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.871195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.874716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.874748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.874759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.878223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.878257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.878267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.881793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.881836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.881846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.885393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.885525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.885539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.888921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.888958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.888968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.892263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.892300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.892310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.895816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.895852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.895862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.899350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.899389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.899400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.902884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.902920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.902930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.906174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.906208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.906218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.909732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.909763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.909774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.913068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.913199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 
[2024-11-20 15:29:05.913214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.916892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.916930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.916941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.919108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.919145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.919156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.922508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.922542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.922553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.925985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.926019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.926029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.929344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.929491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.929506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.932843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.932876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.932886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.936384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.936421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7712 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.936431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.939932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.939966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.939976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.943362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.943409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.943419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.946957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.946994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.636 [2024-11-20 15:29:05.947006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.636 [2024-11-20 15:29:05.950533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.636 [2024-11-20 15:29:05.950570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.950582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.954018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.954051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.954061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.956221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.956256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.956266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.959765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.959800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.959821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.963376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.963412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.963422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.965970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.966000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.966010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.969181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.969217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.969227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.972628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.972827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.972962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.975539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.975695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.975710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.978738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.978772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.978782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.982244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.982279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.982289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.985905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.986080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.986171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.989642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.989817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.989883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.993145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.993180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.993192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:05.996705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:05.996739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:05.996750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.000267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:06.000303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.000313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.003797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:06.003834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.003845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.007256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 
[2024-11-20 15:29:06.007395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.007410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.010896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:06.010934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.010945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.014365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:06.014400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.014411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.017903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:06.017936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.017946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.021492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:06.021527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.021538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.024955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:06.024991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.025002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.028516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:06.028552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.028563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.031914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:06.031950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.031960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.035284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:06.035446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.035460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.038871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:06.038908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.038919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.637 [2024-11-20 15:29:06.042350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.637 [2024-11-20 15:29:06.042383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.637 [2024-11-20 15:29:06.042394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.045848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.045882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.045892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.049487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.049523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.049535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.052988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.053023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.053034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.638 9203.00 IOPS, 1150.38 MiB/s [2024-11-20T15:29:06.120Z] [2024-11-20 
15:29:06.057498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.057537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.057548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.060905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.060940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.060951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.064345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.064381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.064392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.067321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.067488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.067503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.069963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.070000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.070011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.073439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.073476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.073487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.077157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.077196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.077206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.080494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.080531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.080542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.082847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.082883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.082894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.086642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.086688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.086699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.090357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.090391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.090401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.093803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.093848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.093858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.096083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.096118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.096129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.099706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.099741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.099751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.103059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.103096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.103108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.105477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.105510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.105520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.108963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.108997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.109007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.638 [2024-11-20 15:29:06.112391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.638 [2024-11-20 15:29:06.112426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.638 [2024-11-20 15:29:06.112437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.116001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.116037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.116047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.118661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.118709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.118720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.121868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.121903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.121913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.125460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.125497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.125508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.128097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.128257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.128271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.131428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.131462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.131484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.134654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.134705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.134717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.137154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.137280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.137294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.140708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.140743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.140754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.144094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.144130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.899 [2024-11-20 15:29:06.144140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.146616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.146654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.146664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.150201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.150236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.150247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.153894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.153928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.153939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.156407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.156441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.156452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.159570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.159608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.159618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.163167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.163204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.163215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.166569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.166605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.166616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.168583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.168616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.168627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.171910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.171946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.171956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.174974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.175011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.175022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.177545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.177701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.177716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.180131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.180168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.180179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.183155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.183192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.183203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.186650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.899 [2024-11-20 15:29:06.186695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.899 [2024-11-20 15:29:06.186706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.899 [2024-11-20 15:29:06.190149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.190184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.190195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.193589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.193757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.193771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.197289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.197326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.197337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.200890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.201042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.201244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.204930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.205091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.205231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.208506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.208686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.208802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.212286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.212455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.212624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.216211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.216362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.216376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.219834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.219870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.219881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.223446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.223485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.223496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.227010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.227048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.227058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.230579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.230738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.230753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.234080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.234114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.234124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.237487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 
[2024-11-20 15:29:06.237521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.237531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.241138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.241175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.241186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.244646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.244691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.244702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.248211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.248248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.248259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.251788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.251822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.251833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.255146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.255182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.255192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.258799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.258837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.258848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.261310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.261344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.261354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.264273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.264310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.264320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.267867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.267904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.267915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.271256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.271409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.271423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.273514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.273547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.273558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.276738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.276773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.900 [2024-11-20 15:29:06.276784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.900 [2024-11-20 15:29:06.279903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.900 [2024-11-20 15:29:06.279939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.279950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.283287] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.283323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.283334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.286800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.286835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.286845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.290319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.290369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.290380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.293629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.293814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.293910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.297029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.297063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.297073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.300452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.300488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.300499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.303957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.303991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.304001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
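The records above all follow the same shape: nvme_tcp.c:1365 (nvme_tcp_accel_seq_recv_compute_crc32_done) reports a data digest mismatch on the queue pair, nvme_qpair.c then prints the affected READ command and its completion, and the completion status is COMMAND TRANSIENT TRANSPORT ERROR (sct 0x0 / sc 0x22) with dnr:0, i.e. retryable. Below is a minimal sketch of the two checks involved, assuming a plain bitwise software CRC32C rather than SPDK's accelerated digest path; the helper names are illustrative, not SPDK's:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Reflected CRC32C (Castagnoli polynomial 0x1EDC6F41, reflected form
     * 0x82F63B78): the checksum NVMe/TCP carries in the DDGST trailer of
     * a data PDU. Bitwise for clarity; production code uses lookup tables
     * or hardware CRC instructions. */
    static uint32_t crc32c(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Receive-side digest check: a mismatch here is what each
     * "data digest error on tqpair=..." line above reports. */
    static bool data_digest_ok(const void *payload, size_t len, uint32_t ddgst)
    {
        return crc32c(payload, len) == ddgst;
    }

    /* Classifying the resulting completion: "(00/22)" in the log is
     * sct 0x0 (generic) / sc 0x22 (command transient transport error),
     * and "dnr:0" means the do-not-retry bit is clear. */
    static bool completion_is_retryable(uint8_t sct, uint8_t sc, bool dnr)
    {
        return !dnr && sct == 0x0 && sc == 0x22;
    }

Because every failing command is completed as transient and retryable, the I/O stream keeps running despite the digest failures, which is consistent with the bdevperf throughput line (9203.00 IOPS, 1150.38 MiB/s) staying interleaved with the error records.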
00:21:19.901 [2024-11-20 15:29:06.307304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.307357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.307368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.310868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.310901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.310913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.314363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.314496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.314511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.317836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.317864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.317876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.321426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.321460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.321471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.324960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.324995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.325006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.328460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.328497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.328508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.332069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.332106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.332117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.335612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.335649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.335660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.338923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.338959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.338970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.342393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.342531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.342546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.345994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.346029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.346041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.349616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.349651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.349662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.901 [2024-11-20 15:29:06.353023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:19.901 [2024-11-20 15:29:06.353058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.901 [2024-11-20 15:29:06.353069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:21:19.901 [2024-11-20 15:29:06.356419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10)
00:21:19.901 [2024-11-20 15:29:06.356455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.901 [2024-11-20 15:29:06.356466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:21:19.901 [2024-11-20 15:29:06.359888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10)
00:21:19.901 [2024-11-20 15:29:06.359922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.901 [2024-11-20 15:29:06.359933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line sequence - a data digest error on tqpair=(0x6b9d10), the failing READ (qid:1, len:32, varying cid and lba), and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) - repeats from 15:29:06.363413 through 15:29:06.812053 ...]
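[Note: every failure in this run is the NVMe/TCP data digest (DDGST) check rejecting a received data PDU, after which the initiator completes the READ with status 00/22, Transient Transport Error. The sketch below is a minimal, self-contained illustration of that check using a plain bitwise CRC-32C; it is not SPDK's implementation (nvme_tcp_accel_seq_recv_compute_crc32_done, judging by its name, computes the digest through the accel sequence path), and the 512-byte payload and the injected bit flip are made-up illustration data.]

/* Minimal sketch of the NVMe/TCP data digest (DDGST) check.
 * NVMe/TCP digests are CRC-32C (Castagnoli): reflected polynomial
 * 0x82F63B78, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++) {
            /* Shift LSB-first; fold in the reflected polynomial on carry-out. */
            crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t pdu_data[512];                    /* stand-in for a data PDU payload */
    memset(pdu_data, 0xA5, sizeof(pdu_data));

    /* Sender appends the digest of the DATA field as the DDGST trailer. */
    uint32_t ddgst = crc32c(pdu_data, sizeof(pdu_data));

    pdu_data[100] ^= 0x01;                    /* simulate on-the-wire corruption */

    /* Receiver recomputes the digest over the received DATA field and compares
     * it with the trailer; a mismatch is what the log reports as
     * "data digest error", and the command then fails with 00/22. */
    if (crc32c(pdu_data, sizeof(pdu_data)) != ddgst) {
        printf("data digest error: DDGST mismatch\n");
    }
    return 0;
}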
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.165 [2024-11-20 15:29:06.609977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.165 [2024-11-20 15:29:06.610011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.165 [2024-11-20 15:29:06.610021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.165 [2024-11-20 15:29:06.612796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.165 [2024-11-20 15:29:06.612829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.165 [2024-11-20 15:29:06.612840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.165 [2024-11-20 15:29:06.616452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.165 [2024-11-20 15:29:06.616489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.165 [2024-11-20 15:29:06.616499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.165 [2024-11-20 15:29:06.619746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.165 [2024-11-20 15:29:06.619796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.165 [2024-11-20 15:29:06.619807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.165 [2024-11-20 15:29:06.622132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.165 [2024-11-20 15:29:06.622164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.165 [2024-11-20 15:29:06.622174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.166 [2024-11-20 15:29:06.625686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.166 [2024-11-20 15:29:06.625718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-11-20 15:29:06.625728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.166 [2024-11-20 15:29:06.629136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.166 [2024-11-20 15:29:06.629172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-11-20 15:29:06.629183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.166 [2024-11-20 15:29:06.631811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.166 [2024-11-20 15:29:06.631844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-11-20 15:29:06.631854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.166 [2024-11-20 15:29:06.634823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.166 [2024-11-20 15:29:06.634860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-11-20 15:29:06.634871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.166 [2024-11-20 15:29:06.637884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.166 [2024-11-20 15:29:06.637917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-11-20 15:29:06.637927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.427 [2024-11-20 15:29:06.640420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.427 [2024-11-20 15:29:06.640569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.427 [2024-11-20 15:29:06.640585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.427 [2024-11-20 15:29:06.643743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.427 [2024-11-20 15:29:06.643777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.427 [2024-11-20 15:29:06.643788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.427 [2024-11-20 15:29:06.646324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.427 [2024-11-20 15:29:06.646376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.427 [2024-11-20 15:29:06.646387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.427 [2024-11-20 15:29:06.649389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.427 [2024-11-20 15:29:06.649536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.427 
[2024-11-20 15:29:06.649551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.427 [2024-11-20 15:29:06.652477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.652510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.652520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.655140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.655180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.655191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.658178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.658313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.658343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.661385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.661419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.661430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.663832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.663878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.663888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.667070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.667204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.667218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.669963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.669994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.670005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.672892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.672928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.672938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.675991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.676027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.676037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.678582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.678620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.678631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.681251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.681405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.681419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.684337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.684375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.684386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.687541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.687580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.687591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.690109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.690141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.690151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.693094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.693128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.693138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.695726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.695761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.695772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.699032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.699169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.699183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.701701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.701731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.701742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.704790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.704823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.704834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.708245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.708280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.708291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.711734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.711768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.711778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.714297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.714353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.714365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.717461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.717497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.717507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.720869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.720917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.720929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.724531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.724570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.724581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.727977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.728013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.728023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.731635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.428 [2024-11-20 15:29:06.731775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.428 [2024-11-20 15:29:06.731789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.428 [2024-11-20 15:29:06.735286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 
[2024-11-20 15:29:06.735325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.735336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.738961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.739133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.739217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.742362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.742505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.742520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.746134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.746169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.746180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.749465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.749499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.749509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.752970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.753006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.753016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.756329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.756480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.756494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.760073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.760110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.760120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.763620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.763658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.763669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.767056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.767093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.767104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.770631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.770669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.770691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.774091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.774126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.774136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.777615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.777649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.777659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.781128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.781163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.781173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.784487] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.784649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.784663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.788155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.788191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.788201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.791781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.791825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.791836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.795330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.795368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.795379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.798835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.798873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.798883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.802330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.802363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.802374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.805854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.805889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.805899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
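Each failure in this run is the same three-line pattern: nvme_tcp.c reports a receive-side data digest (CRC32C) mismatch on qpair 0x6b9d10, nvme_qpair.c prints the READ that was in flight, and the completion carries status (00/22), i.e. status code type 0x0 (generic) with status code 0x22, Transient Transport Error, with dnr:0 meaning the Do Not Retry bit is clear and the command may be resubmitted. Offline, the occurrences in a saved console log can be tallied with one-liners like the following (illustrative only; build.log is a placeholder path):

    grep -c 'data digest error on tqpair=(0x6b9d10)' build.log
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' build.log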
00:21:20.429 [2024-11-20 15:29:06.809372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.809407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.809418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.812009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.812042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.812053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.815212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.815250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.815261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.818761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.818797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.818807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.822263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.822297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.822315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.824452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.429 [2024-11-20 15:29:06.824486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-11-20 15:29:06.824497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.429 [2024-11-20 15:29:06.828100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.828137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.828148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.830652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.830800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.830815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.833960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.833994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.834004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.837598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.837633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.837644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.841180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.841216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.841227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.843742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.843774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.843785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.846851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.846889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.846900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.850493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.850529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.850539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.853013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.853159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.853191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.856510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.856658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.856764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.860269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.860449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.860539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.864030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.864195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.864282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.867842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.868009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.868092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.871822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.871989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.872108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.875602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.875798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.875892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.879425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.879604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.879723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.883282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.883452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.883467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.887035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.887073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.887084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.890728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.890763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.890773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.894213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.894247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.894257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.897601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.897636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.897647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.901083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.901121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 
[2024-11-20 15:29:06.901132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.430 [2024-11-20 15:29:06.904679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.430 [2024-11-20 15:29:06.904725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-11-20 15:29:06.904737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.691 [2024-11-20 15:29:06.908225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.691 [2024-11-20 15:29:06.908261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-11-20 15:29:06.908271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.691 [2024-11-20 15:29:06.911842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.691 [2024-11-20 15:29:06.912025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-11-20 15:29:06.912164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.691 [2024-11-20 15:29:06.915875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.691 [2024-11-20 15:29:06.916024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-11-20 15:29:06.916038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.691 [2024-11-20 15:29:06.919672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.691 [2024-11-20 15:29:06.919718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-11-20 15:29:06.919728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.691 [2024-11-20 15:29:06.923066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.691 [2024-11-20 15:29:06.923104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-11-20 15:29:06.923115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.691 [2024-11-20 15:29:06.926684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.691 [2024-11-20 15:29:06.926732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-11-20 15:29:06.926743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.691 [2024-11-20 15:29:06.930314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.691 [2024-11-20 15:29:06.930364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-11-20 15:29:06.930375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.691 [2024-11-20 15:29:06.933964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.691 [2024-11-20 15:29:06.933998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-11-20 15:29:06.934008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.691 [2024-11-20 15:29:06.937559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.691 [2024-11-20 15:29:06.937698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-11-20 15:29:06.937713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.941213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.941249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.941260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.944783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.944819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.944830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.948156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.948192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.948203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.951762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.951797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.951807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.955311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.955350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.955361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.958736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.958771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.958782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.962248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.962281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.962291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.965687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.965718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.965728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.969190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.969227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.969237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.972650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.972693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.972719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.976203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.976240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.976250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.979757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.979793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.979804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.983291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.983328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.983339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.986880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.986917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.986928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.990206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.990240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.990250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.993732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.993764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.993775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:06.997090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:06.997124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:06.997135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:07.000566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 
[2024-11-20 15:29:07.000602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:07.000612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:07.004147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:07.004184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:07.004194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:07.007486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:07.007523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:07.007534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:07.011080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:07.011117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:07.011128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:07.014424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:07.014460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:07.014471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:07.017945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:07.017978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:07.017988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:07.021484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:07.021630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:07.021645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:07.025098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:07.025134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:07.025145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:07.028664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:07.028728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-11-20 15:29:07.028739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.692 [2024-11-20 15:29:07.032290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.692 [2024-11-20 15:29:07.032326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-11-20 15:29:07.032336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.693 [2024-11-20 15:29:07.035898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.693 [2024-11-20 15:29:07.035933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-11-20 15:29:07.035943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.693 [2024-11-20 15:29:07.039498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.693 [2024-11-20 15:29:07.039534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-11-20 15:29:07.039545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.693 [2024-11-20 15:29:07.042977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.693 [2024-11-20 15:29:07.043014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-11-20 15:29:07.043025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:20.693 [2024-11-20 15:29:07.046556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10) 00:21:20.693 [2024-11-20 15:29:07.046593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-11-20 15:29:07.046604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.693 [2024-11-20 15:29:07.050085] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10)
00:21:20.693 [2024-11-20 15:29:07.050118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.693 [2024-11-20 15:29:07.050128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:21:20.693 9220.00 IOPS, 1152.50 MiB/s [2024-11-20T15:29:07.175Z] [2024-11-20 15:29:07.054975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b9d10)
00:21:20.693 [2024-11-20 15:29:07.055004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.693 [2024-11-20 15:29:07.055015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:21:20.693
00:21:20.693 Latency(us)
00:21:20.693 [2024-11-20T15:29:07.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:20.693 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:21:20.693 nvme0n1 : 2.00 9219.44 1152.43 0.00 0.00 1732.63 496.78 5895.61
00:21:20.693 [2024-11-20T15:29:07.175Z] ===================================================================================================================
00:21:20.693 [2024-11-20T15:29:07.175Z] Total : 9219.44 1152.43 0.00 0.00 1732.63 496.78 5895.61
00:21:20.693 {
00:21:20.693 "results": [
00:21:20.693 {
00:21:20.693 "job": "nvme0n1",
00:21:20.693 "core_mask": "0x2",
00:21:20.693 "workload": "randread",
00:21:20.693 "status": "finished",
00:21:20.693 "queue_depth": 16,
00:21:20.693 "io_size": 131072,
00:21:20.693 "runtime": 2.001856,
00:21:20.693 "iops": 9219.444355637968,
00:21:20.693 "mibps": 1152.430544454746,
00:21:20.693 "io_failed": 0,
00:21:20.693 "io_timeout": 0,
00:21:20.693 "avg_latency_us": 1732.6301826290858,
00:21:20.693 "min_latency_us": 496.7839357429719,
00:21:20.693 "max_latency_us": 5895.608032128514
00:21:20.693 }
00:21:20.693 ],
00:21:20.693 "core_count": 1
00:21:20.693 }
00:21:20.693 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:20.693 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:20.693 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:20.693 | .driver_specific
00:21:20.693 | .nvme_error
00:21:20.693 | .status_code
00:21:20.693 | .command_transient_transport_error'
00:21:20.693 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:20.951 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 596 > 0 ))
00:21:20.951 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95342
00:21:20.951 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95342 ']'
00:21:20.951 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95342
00:21:20.951 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:21:20.951 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:20.951 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95342
00:21:20.951 killing process with pid 95342
00:21:20.951 Received shutdown signal, test time was about 2.000000 seconds
00:21:20.951
00:21:20.951 Latency(us)
00:21:20.951 [2024-11-20T15:29:07.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:20.951 [2024-11-20T15:29:07.433Z] ===================================================================================================================
00:21:20.951 [2024-11-20T15:29:07.433Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:20.951 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:20.951 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:20.952 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95342'
00:21:20.952 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95342
00:21:20.952 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95342
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95431
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95431 /var/tmp/bperf.sock
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95431 ']'
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:21.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:21.210 15:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:21.210 [2024-11-20 15:29:07.670980] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
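At this point the first randread pass is fully accounted for: every injected crc32c failure surfaced as a data digest error followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, io_failed stays 0 in the JSON summary even though 596 commands completed with transient errors (which matches the --bdev-retry-count -1 policy visible below in the second run's setup), and digest.sh then asserted that the per-bdev transient-error counter was non-zero via (( 596 > 0 )). The throughput columns also cross-check: at an IO size of 131072 bytes, 9219.44 IOPS x 0.125 MiB = 1152.43 MiB/s. A minimal sketch of that counter check, reusing only the rpc.py invocation and jq filter shown in the trace (the helper name mirrors digest.sh's get_transient_errcount; treat it as an illustration, not the canonical script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    get_transient_errcount() {
        # --nvme-error-stat (set before attach) makes bdev_get_iostat carry
        # per-status-code NVMe error counters for the bdev
        "$rpc" -s "$sock" bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    count=$(get_transient_errcount nvme0n1)
    (( count > 0 ))   # this run counted 596 transient transport errors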
00:21:21.210 [2024-11-20 15:29:07.671059] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95431 ]
00:21:21.468 [2024-11-20 15:29:07.824481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:21.468 [2024-11-20 15:29:07.885038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:22.404 15:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:22.404 15:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:21:22.404 15:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:22.404 15:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:22.404 15:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:22.404 15:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.404 15:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:22.404 15:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.404 15:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:22.404 15:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:22.663 nvme0n1
00:21:22.663 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:21:22.663 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.663 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:22.664 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.664 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:22.664 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:22.923 Running I/O for 2 seconds...
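The second pass repeats the flow for randwrite with 4096-byte I/O at queue depth 128 against the same target. Collapsing the xtrace above into a plain sequence (command lines taken verbatim from the trace; only the two shell variables are added for brevity, and the precise semantics of -i 256 are whatever SPDK's accel error injection implements, not spelled out here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # count NVMe errors per status code and retry failed I/O indefinitely
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # keep crc32c injection off so the controller attaches cleanly
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
    # attach over TCP with data digest enabled (--ddgst); this creates nvme0n1
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # arm crc32c corruption so subsequent data digests mismatch
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 256
    # kick off the timed workload inside the already-running bdevperf
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

The Data digest error / WRITE / TRANSIENT TRANSPORT ERROR triplets that follow are the expected result of that corruption; note that on the write path the mismatch is now flagged by tcp.c:2233:data_crc32_calc_done on qpair 0x1196020, while the command and completion prints still come from the host's nvme_qpair.c.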
00:21:22.923 [2024-11-20 15:29:09.199515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef35f0 00:21:22.923 [2024-11-20 15:29:09.200499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.200660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.210655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee1f80 00:21:22.923 [2024-11-20 15:29:09.212115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.212262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.217281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee3060 00:21:22.923 [2024-11-20 15:29:09.217996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.218027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.226501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef6cc8 00:21:22.923 [2024-11-20 15:29:09.227347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.227375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.235277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef35f0 00:21:22.923 [2024-11-20 15:29:09.235869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.235892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.245840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ede470 00:21:22.923 [2024-11-20 15:29:09.246718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.246747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.254534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef3e60 00:21:22.923 [2024-11-20 15:29:09.255122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.255147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.263028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee95a0 00:21:22.923 [2024-11-20 15:29:09.263529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.263552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.273313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef3e60 00:21:22.923 [2024-11-20 15:29:09.274540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.274569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.281970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efbcf0 00:21:22.923 [2024-11-20 15:29:09.282947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.283082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.290537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef20d8 00:21:22.923 [2024-11-20 15:29:09.291393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.291425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.299038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef3a28 00:21:22.923 [2024-11-20 15:29:09.299900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.299930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.309769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016edf988 00:21:22.923 [2024-11-20 15:29:09.311129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.311162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.316242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efa3a0 00:21:22.923 [2024-11-20 15:29:09.316844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.316866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.327003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef2948 00:21:22.923 [2024-11-20 15:29:09.328239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.328270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.335504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eec840 00:21:22.923 [2024-11-20 15:29:09.336350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.336383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.344266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee9168 00:21:22.923 [2024-11-20 15:29:09.345153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.345182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.354894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee49b0 00:21:22.923 [2024-11-20 15:29:09.356334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.356364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.361330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee49b0 00:21:22.923 [2024-11-20 15:29:09.361998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.362069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.372144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee9168 00:21:22.923 [2024-11-20 15:29:09.373320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.373450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.380609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef7da8 00:21:22.923 [2024-11-20 15:29:09.381525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.381559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.389412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef2948 00:21:22.923 [2024-11-20 15:29:09.390397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.923 [2024-11-20 15:29:09.390427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:22.923 [2024-11-20 15:29:09.400145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efa3a0 00:21:22.924 [2024-11-20 15:29:09.401605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.924 [2024-11-20 15:29:09.401635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.406518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016edf988 00:21:23.184 [2024-11-20 15:29:09.407263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.184 [2024-11-20 15:29:09.407289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.417236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef3a28 00:21:23.184 [2024-11-20 15:29:09.418614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.184 [2024-11-20 15:29:09.418642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.425633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef5be8 00:21:23.184 [2024-11-20 15:29:09.426662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.184 [2024-11-20 15:29:09.426704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.434425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef0788 00:21:23.184 [2024-11-20 15:29:09.435456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.184 [2024-11-20 15:29:09.435487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.443345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee4140 00:21:23.184 [2024-11-20 15:29:09.444040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.184 [2024-11-20 15:29:09.444114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.452255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef92c0 00:21:23.184 [2024-11-20 15:29:09.453132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.184 [2024-11-20 15:29:09.453165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.460983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeb760 00:21:23.184 [2024-11-20 15:29:09.461894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.184 [2024-11-20 15:29:09.461924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.471659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef0ff8 00:21:23.184 [2024-11-20 15:29:09.473223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.184 [2024-11-20 15:29:09.473252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.478195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eef6a8 00:21:23.184 [2024-11-20 15:29:09.478903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.184 [2024-11-20 15:29:09.478973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.488953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eed0b0 00:21:23.184 [2024-11-20 15:29:09.490148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.184 [2024-11-20 15:29:09.490275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.497459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef7100 00:21:23.184 [2024-11-20 15:29:09.498403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.184 [2024-11-20 15:29:09.498436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.506249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee7818 00:21:23.184 [2024-11-20 15:29:09.507244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.184 [2024-11-20 
15:29:09.507274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:23.184 [2024-11-20 15:29:09.517130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeaef0 00:21:23.185 [2024-11-20 15:29:09.518621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.518652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.523556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee23b8 00:21:23.185 [2024-11-20 15:29:09.524422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.524445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.532888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eed4e8 00:21:23.185 [2024-11-20 15:29:09.533627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.533770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.543999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee5658 00:21:23.185 [2024-11-20 15:29:09.545381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.545509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.550532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eebfd0 00:21:23.185 [2024-11-20 15:29:09.551187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.551216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.561329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef8618 00:21:23.185 [2024-11-20 15:29:09.562539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.562570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.569707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016edf550 00:21:23.185 [2024-11-20 15:29:09.570625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:23.185 [2024-11-20 15:29:09.570657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.578536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeb760 00:21:23.185 [2024-11-20 15:29:09.579351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.579379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.587148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef96f8 00:21:23.185 [2024-11-20 15:29:09.587860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.587929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.595710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee0ea0 00:21:23.185 [2024-11-20 15:29:09.596270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.596292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.606882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeea00 00:21:23.185 [2024-11-20 15:29:09.608244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.608274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.615395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef9f68 00:21:23.185 [2024-11-20 15:29:09.616600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.616630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.623931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee99d8 00:21:23.185 [2024-11-20 15:29:09.625010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.625039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.632378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ede470 00:21:23.185 [2024-11-20 15:29:09.633351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4550 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.633380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.640916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef9f68 00:21:23.185 [2024-11-20 15:29:09.641755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.641784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.649341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ede470 00:21:23.185 [2024-11-20 15:29:09.650061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.650091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:23.185 [2024-11-20 15:29:09.657913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee12d8 00:21:23.185 [2024-11-20 15:29:09.658527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.185 [2024-11-20 15:29:09.658552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:23.446 [2024-11-20 15:29:09.668374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eef6a8 00:21:23.446 [2024-11-20 15:29:09.669267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.446 [2024-11-20 15:29:09.669294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:23.446 [2024-11-20 15:29:09.677030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeb760 00:21:23.446 [2024-11-20 15:29:09.677624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.446 [2024-11-20 15:29:09.677650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:23.446 [2024-11-20 15:29:09.685499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef1430 00:21:23.446 [2024-11-20 15:29:09.686033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.446 [2024-11-20 15:29:09.686059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:23.446 [2024-11-20 15:29:09.695839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efeb58 00:21:23.446 [2024-11-20 15:29:09.696950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9338 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.446 [2024-11-20 15:29:09.696980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:23.446 [2024-11-20 15:29:09.704301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee5a90 00:21:23.446 [2024-11-20 15:29:09.705293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.446 [2024-11-20 15:29:09.705322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.712863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef5378 00:21:23.447 [2024-11-20 15:29:09.713730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.713759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.721282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eed920 00:21:23.447 [2024-11-20 15:29:09.722036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.722066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.729850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eebb98 00:21:23.447 [2024-11-20 15:29:09.730494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.730524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.741040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eea680 00:21:23.447 [2024-11-20 15:29:09.742431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.742568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.747723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efd640 00:21:23.447 [2024-11-20 15:29:09.748470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.748500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.756767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eefae0 00:21:23.447 [2024-11-20 15:29:09.757503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:18 nsid:1 lba:8129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.757533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.765308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee27f0 00:21:23.447 [2024-11-20 15:29:09.765949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.765971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.775824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eed4e8 00:21:23.447 [2024-11-20 15:29:09.776612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.776759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.784341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee2c28 00:21:23.447 [2024-11-20 15:29:09.784983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.785015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.792817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef6020 00:21:23.447 [2024-11-20 15:29:09.793347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.793370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.802967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef7970 00:21:23.447 [2024-11-20 15:29:09.804256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.804376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.811592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efef90 00:21:23.447 [2024-11-20 15:29:09.812615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.812646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.820468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee12d8 00:21:23.447 [2024-11-20 15:29:09.821621] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.821651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.828895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeea00 00:21:23.447 [2024-11-20 15:29:09.829763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.829794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.837757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee1b48 00:21:23.447 [2024-11-20 15:29:09.838797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.838825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.848746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef35f0 00:21:23.447 [2024-11-20 15:29:09.850276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.850312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.855301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef0788 00:21:23.447 [2024-11-20 15:29:09.855990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.856019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.866132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeb760 00:21:23.447 [2024-11-20 15:29:09.867338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.867468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.874659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef8618 00:21:23.447 [2024-11-20 15:29:09.875582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.875614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.883528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef7100 00:21:23.447 [2024-11-20 15:29:09.884514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.884543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.894392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef96f8 00:21:23.447 [2024-11-20 15:29:09.895898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.895926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.900829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee2c28 00:21:23.447 [2024-11-20 15:29:09.901439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.901468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.912233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeb328 00:21:23.447 [2024-11-20 15:29:09.913622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.913763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:23.447 [2024-11-20 15:29:09.918930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016edf988 00:21:23.447 [2024-11-20 15:29:09.919784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.447 [2024-11-20 15:29:09.919808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:23.707 [2024-11-20 15:29:09.929878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef2510 00:21:23.707 [2024-11-20 15:29:09.931167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.707 [2024-11-20 15:29:09.931199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:23.707 [2024-11-20 15:29:09.938316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee6300 00:21:23.707 [2024-11-20 15:29:09.939315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.707 [2024-11-20 15:29:09.939347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:23.707 [2024-11-20 15:29:09.947219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee4140 00:21:23.707 [2024-11-20 
15:29:09.948249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.707 [2024-11-20 15:29:09.948278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:23.707 [2024-11-20 15:29:09.955643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee88f8 00:21:23.707 [2024-11-20 15:29:09.956395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.707 [2024-11-20 15:29:09.956427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:23.707 [2024-11-20 15:29:09.964490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eef270 00:21:23.707 [2024-11-20 15:29:09.965320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.707 [2024-11-20 15:29:09.965345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:23.707 [2024-11-20 15:29:09.973545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee7818 00:21:23.707 [2024-11-20 15:29:09.973990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.707 [2024-11-20 15:29:09.974013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:23.707 [2024-11-20 15:29:09.983951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eebfd0 00:21:23.707 [2024-11-20 15:29:09.985138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.707 [2024-11-20 15:29:09.985276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:23.707 [2024-11-20 15:29:09.992709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee3498 00:21:23.707 [2024-11-20 15:29:09.993663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:09.993705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.001277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef57b0 00:21:23.708 [2024-11-20 15:29:10.002094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.002124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.010186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eedd58 
00:21:23.708 [2024-11-20 15:29:10.010903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.010957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.020155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016edf118 00:21:23.708 [2024-11-20 15:29:10.021242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.021273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.028762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef2510 00:21:23.708 [2024-11-20 15:29:10.029738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.029767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.037454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ede038 00:21:23.708 [2024-11-20 15:29:10.038294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.038331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.046136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eee5c8 00:21:23.708 [2024-11-20 15:29:10.046876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.046905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.054686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efbcf0 00:21:23.708 [2024-11-20 15:29:10.055289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.055312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.065294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016edf550 00:21:23.708 [2024-11-20 15:29:10.066030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.066103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.073840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) 
with pdu=0x200016ef20d8 00:21:23.708 [2024-11-20 15:29:10.074596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.074625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.082528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef8618 00:21:23.708 [2024-11-20 15:29:10.083031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.083057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.092763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef6cc8 00:21:23.708 [2024-11-20 15:29:10.094005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.094032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.101465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeb328 00:21:23.708 [2024-11-20 15:29:10.102480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.102511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.110007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee88f8 00:21:23.708 [2024-11-20 15:29:10.110889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.110919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.118488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee38d0 00:21:23.708 [2024-11-20 15:29:10.119256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.119286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.127059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee1710 00:21:23.708 [2024-11-20 15:29:10.127696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.127726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.138336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1196020) with pdu=0x200016ee84c0 00:21:23.708 [2024-11-20 15:29:10.139735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.139763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.146940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef0350 00:21:23.708 [2024-11-20 15:29:10.148239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.148269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.155526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef7100 00:21:23.708 [2024-11-20 15:29:10.156809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.156835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.162876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee6b70 00:21:23.708 [2024-11-20 15:29:10.163617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.163642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.173526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee84c0 00:21:23.708 [2024-11-20 15:29:10.174354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.708 [2024-11-20 15:29:10.174386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:23.708 [2024-11-20 15:29:10.182919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef9b30 00:21:23.708 27942.00 IOPS, 109.15 MiB/s [2024-11-20T15:29:10.191Z] [2024-11-20 15:29:10.184164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.709 [2024-11-20 15:29:10.184191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:23.968 [2024-11-20 15:29:10.191870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee6300 00:21:23.968 [2024-11-20 15:29:10.192889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.968 [2024-11-20 15:29:10.193021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:23.968 
[2024-11-20 15:29:10.200258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeee38 00:21:23.968 [2024-11-20 15:29:10.200986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.968 [2024-11-20 15:29:10.201006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:23.968 [2024-11-20 15:29:10.209168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeaab8 00:21:23.968 [2024-11-20 15:29:10.210104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.968 [2024-11-20 15:29:10.210251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:23.968 [2024-11-20 15:29:10.220295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef0350 00:21:23.969 [2024-11-20 15:29:10.221761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.221905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.227044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef9b30 00:21:23.969 [2024-11-20 15:29:10.227756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.227899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.238126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efc128 00:21:23.969 [2024-11-20 15:29:10.239235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.239381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.249193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016edf988 00:21:23.969 [2024-11-20 15:29:10.250816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.250945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.255901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efef90 00:21:23.969 [2024-11-20 15:29:10.256749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.256776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:21:23.969 [2024-11-20 15:29:10.265086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef4f40 00:21:23.969 [2024-11-20 15:29:10.265814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.265884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.273653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efc128 00:21:23.969 [2024-11-20 15:29:10.274292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.274330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.284215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016edf988 00:21:23.969 [2024-11-20 15:29:10.284990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.285121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.292721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef4b08 00:21:23.969 [2024-11-20 15:29:10.293327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.293371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.301190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eed920 00:21:23.969 [2024-11-20 15:29:10.301708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.301731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.311453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee0ea0 00:21:23.969 [2024-11-20 15:29:10.312713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.312740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.320132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efac10 00:21:23.969 [2024-11-20 15:29:10.321129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.321160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.328668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee4578 00:21:23.969 [2024-11-20 15:29:10.329553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.329583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.337194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eddc00 00:21:23.969 [2024-11-20 15:29:10.337950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.337979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.345795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee6fa8 00:21:23.969 [2024-11-20 15:29:10.346555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.346583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.357206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef9b30 00:21:23.969 [2024-11-20 15:29:10.358761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.358792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.363543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ede470 00:21:23.969 [2024-11-20 15:29:10.364224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.364297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.374343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efdeb0 00:21:23.969 [2024-11-20 15:29:10.375515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.375645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.382860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee1710 00:21:23.969 [2024-11-20 15:29:10.383747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.383779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.391674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efdeb0 00:21:23.969 [2024-11-20 15:29:10.392589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.392618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.402327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ede470 00:21:23.969 [2024-11-20 15:29:10.403901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.403925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.408896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef0350 00:21:23.969 [2024-11-20 15:29:10.409719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.969 [2024-11-20 15:29:10.409744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:23.969 [2024-11-20 15:29:10.419720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef3e60 00:21:23.970 [2024-11-20 15:29:10.420940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.970 [2024-11-20 15:29:10.420969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:23.970 [2024-11-20 15:29:10.428023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efda78 00:21:23.970 [2024-11-20 15:29:10.428935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.970 [2024-11-20 15:29:10.428966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:23.970 [2024-11-20 15:29:10.436851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efe720 00:21:23.970 [2024-11-20 15:29:10.437784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.970 [2024-11-20 15:29:10.437814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:23.970 [2024-11-20 15:29:10.445627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeea00 00:21:23.970 [2024-11-20 15:29:10.446627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.970 [2024-11-20 15:29:10.446657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.456405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eec840 00:21:24.230 [2024-11-20 15:29:10.457912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.458040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.462959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eebb98 00:21:24.230 [2024-11-20 15:29:10.463823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.463850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.473738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee84c0 00:21:24.230 [2024-11-20 15:29:10.475131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.475160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.482951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee0a68 00:21:24.230 [2024-11-20 15:29:10.484250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.484280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.491526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efb8b8 00:21:24.230 [2024-11-20 15:29:10.492710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.492837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.500127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee9168 00:21:24.230 [2024-11-20 15:29:10.501101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.501134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.508905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efa7d8 00:21:24.230 [2024-11-20 15:29:10.510065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.510091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.517324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee0ea0 00:21:24.230 [2024-11-20 15:29:10.518083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.518115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.526116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeea00 00:21:24.230 [2024-11-20 15:29:10.526935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.526961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.536871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeb760 00:21:24.230 [2024-11-20 15:29:10.538185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.538213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.543260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee3498 00:21:24.230 [2024-11-20 15:29:10.543837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.543858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.554100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef0ff8 00:21:24.230 [2024-11-20 15:29:10.555191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.555218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.562518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef5378 00:21:24.230 [2024-11-20 15:29:10.563313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.563342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.571376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef96f8 00:21:24.230 [2024-11-20 15:29:10.572233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 
15:29:10.572260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.582200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeb328 00:21:24.230 [2024-11-20 15:29:10.583591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.583618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.591395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efb480 00:21:24.230 [2024-11-20 15:29:10.592775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.592801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.599976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee4140 00:21:24.230 [2024-11-20 15:29:10.601252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.601279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.608489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee3d08 00:21:24.230 [2024-11-20 15:29:10.609629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.609656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.617141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee5a90 00:21:24.230 [2024-11-20 15:29:10.618173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.618201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.625651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef1ca0 00:21:24.230 [2024-11-20 15:29:10.626560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.230 [2024-11-20 15:29:10.626588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:24.230 [2024-11-20 15:29:10.634158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef7538 00:21:24.231 [2024-11-20 15:29:10.634961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:24.231 [2024-11-20 15:29:10.634988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:24.231 [2024-11-20 15:29:10.642611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef35f0 00:21:24.231 [2024-11-20 15:29:10.643277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.231 [2024-11-20 15:29:10.643304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:24.231 [2024-11-20 15:29:10.651166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef6020 00:21:24.231 [2024-11-20 15:29:10.651723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.231 [2024-11-20 15:29:10.651745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:24.231 [2024-11-20 15:29:10.661756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef8a50 00:21:24.231 [2024-11-20 15:29:10.662455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.231 [2024-11-20 15:29:10.662482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:24.231 [2024-11-20 15:29:10.670254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee6b70 00:21:24.231 [2024-11-20 15:29:10.670822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.231 [2024-11-20 15:29:10.670844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:24.231 [2024-11-20 15:29:10.678870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee6b70 00:21:24.231 [2024-11-20 15:29:10.679321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.231 [2024-11-20 15:29:10.679344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:24.231 [2024-11-20 15:29:10.689067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef8618 00:21:24.231 [2024-11-20 15:29:10.690123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.231 [2024-11-20 15:29:10.690151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:24.231 [2024-11-20 15:29:10.696269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eedd58 00:21:24.231 [2024-11-20 15:29:10.696819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8217 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:24.231 [2024-11-20 15:29:10.696842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:24.231 [2024-11-20 15:29:10.707556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef1ca0 00:21:24.231 [2024-11-20 15:29:10.708885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.231 [2024-11-20 15:29:10.708911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:24.491 [2024-11-20 15:29:10.716142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee3d08 00:21:24.491 [2024-11-20 15:29:10.717336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.491 [2024-11-20 15:29:10.717363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:24.491 [2024-11-20 15:29:10.724705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eee5c8 00:21:24.491 [2024-11-20 15:29:10.725780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.491 [2024-11-20 15:29:10.725807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:24.491 [2024-11-20 15:29:10.733316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eef6a8 00:21:24.491 [2024-11-20 15:29:10.734253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.491 [2024-11-20 15:29:10.734280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:24.491 [2024-11-20 15:29:10.741865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eebb98 00:21:24.491 [2024-11-20 15:29:10.742696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.491 [2024-11-20 15:29:10.742723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:24.491 [2024-11-20 15:29:10.750290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee12d8 00:21:24.491 [2024-11-20 15:29:10.751016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.491 [2024-11-20 15:29:10.751042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:24.491 [2024-11-20 15:29:10.758785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee99d8 00:21:24.491 [2024-11-20 15:29:10.759372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16976 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.491 [2024-11-20 15:29:10.759394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:24.491 [2024-11-20 15:29:10.767546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee3498 00:21:24.491 [2024-11-20 15:29:10.768135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.491 [2024-11-20 15:29:10.768158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:24.491 [2024-11-20 15:29:10.778250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef2948 00:21:24.491 [2024-11-20 15:29:10.779362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.491 [2024-11-20 15:29:10.779390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:24.491 [2024-11-20 15:29:10.786634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efda78 00:21:24.491 [2024-11-20 15:29:10.787454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.787483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.795414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef7da8 00:21:24.492 [2024-11-20 15:29:10.796303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.796330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.806072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee6b70 00:21:24.492 [2024-11-20 15:29:10.807471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.807498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.812531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee27f0 00:21:24.492 [2024-11-20 15:29:10.813178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.813202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.823218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef5378 00:21:24.492 [2024-11-20 15:29:10.824396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:2 nsid:1 lba:21868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.824423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.831548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee12d8 00:21:24.492 [2024-11-20 15:29:10.832413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.832442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.840312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efa3a0 00:21:24.492 [2024-11-20 15:29:10.841234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.841261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.850984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efeb58 00:21:24.492 [2024-11-20 15:29:10.852459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.852485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.857405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee5658 00:21:24.492 [2024-11-20 15:29:10.858109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.858137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.868079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeee38 00:21:24.492 [2024-11-20 15:29:10.869290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.869316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.876360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee1f80 00:21:24.492 [2024-11-20 15:29:10.877297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.877325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.885076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef20d8 00:21:24.492 [2024-11-20 15:29:10.886066] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.886094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.895749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee9168 00:21:24.492 [2024-11-20 15:29:10.897245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.897272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.902100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef6cc8 00:21:24.492 [2024-11-20 15:29:10.902886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.902912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.912844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef8e88 00:21:24.492 [2024-11-20 15:29:10.914113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.914139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.921143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef6020 00:21:24.492 [2024-11-20 15:29:10.922141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.922169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.929839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eec408 00:21:24.492 [2024-11-20 15:29:10.930909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.930935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.938166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee3498 00:21:24.492 [2024-11-20 15:29:10.938969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.938996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.947014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef0788 00:21:24.492 [2024-11-20 15:29:10.947858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.947884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.957624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efda78 00:21:24.492 [2024-11-20 15:29:10.958990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.959018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:24.492 [2024-11-20 15:29:10.964040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee1f80 00:21:24.492 [2024-11-20 15:29:10.964627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.492 [2024-11-20 15:29:10.964652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:24.753 [2024-11-20 15:29:10.974655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef5be8 00:21:24.753 [2024-11-20 15:29:10.975788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.753 [2024-11-20 15:29:10.975814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.753 [2024-11-20 15:29:10.982941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee27f0 00:21:24.753 [2024-11-20 15:29:10.983802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.753 [2024-11-20 15:29:10.983829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.753 [2024-11-20 15:29:10.991746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee88f8 00:21:24.753 [2024-11-20 15:29:10.992625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.753 [2024-11-20 15:29:10.992652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:24.753 [2024-11-20 15:29:11.002582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee12d8 00:21:24.753 [2024-11-20 15:29:11.003985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.753 [2024-11-20 15:29:11.004012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:24.753 [2024-11-20 15:29:11.009005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee12d8 00:21:24.753 [2024-11-20 
15:29:11.009654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.753 [2024-11-20 15:29:11.009687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:24.753 [2024-11-20 15:29:11.019847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee88f8 00:21:24.753 [2024-11-20 15:29:11.021015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.753 [2024-11-20 15:29:11.021041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:24.753 [2024-11-20 15:29:11.028258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee5658 00:21:24.753 [2024-11-20 15:29:11.029168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.753 [2024-11-20 15:29:11.029196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:24.753 [2024-11-20 15:29:11.037112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef5be8 00:21:24.753 [2024-11-20 15:29:11.038058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.753 [2024-11-20 15:29:11.038084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:24.753 [2024-11-20 15:29:11.047973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee1f80 00:21:24.753 [2024-11-20 15:29:11.049460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.753 [2024-11-20 15:29:11.049488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:24.753 [2024-11-20 15:29:11.054445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeb328 00:21:24.753 [2024-11-20 15:29:11.055040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.753 [2024-11-20 15:29:11.055109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:24.753 [2024-11-20 15:29:11.065838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eee190 00:21:24.754 [2024-11-20 15:29:11.067192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.754 [2024-11-20 15:29:11.067220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:24.754 [2024-11-20 15:29:11.074408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef7100 
00:21:24.754 [2024-11-20 15:29:11.075622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.754 [2024-11-20 15:29:11.075649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:24.754 [2024-11-20 15:29:11.083011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efbcf0 00:21:24.754 [2024-11-20 15:29:11.084119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.754 [2024-11-20 15:29:11.084146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:24.754 [2024-11-20 15:29:11.091618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee99d8 00:21:24.754 [2024-11-20 15:29:11.092599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.754 [2024-11-20 15:29:11.092626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:24.754 [2024-11-20 15:29:11.100273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef4298 00:21:24.754 [2024-11-20 15:29:11.101148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.754 [2024-11-20 15:29:11.101176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:24.754 [2024-11-20 15:29:11.110844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eeff18 00:21:24.754 [2024-11-20 15:29:11.112191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.754 [2024-11-20 15:29:11.112217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:24.754 [2024-11-20 15:29:11.117282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efcdd0 00:21:24.754 [2024-11-20 15:29:11.117911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.754 [2024-11-20 15:29:11.117938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.754 [2024-11-20 15:29:11.128133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ee95a0 00:21:24.754 [2024-11-20 15:29:11.129257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.754 [2024-11-20 15:29:11.129284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:24.754 [2024-11-20 15:29:11.136569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) 
with pdu=0x200016eed4e8
00:21:24.754 [2024-11-20 15:29:11.137397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:24.754 [2024-11-20 15:29:11.137426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:21:24.754 [2024-11-20 15:29:11.145422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016eebb98
00:21:24.754 [2024-11-20 15:29:11.146332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:24.754 [2024-11-20 15:29:11.146359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:21:24.754 [2024-11-20 15:29:11.156219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016efda78
00:21:24.754 [2024-11-20 15:29:11.157646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:24.754 [2024-11-20 15:29:11.157682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:21:24.754 [2024-11-20 15:29:11.162659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef0ff8
00:21:24.754 [2024-11-20 15:29:11.163332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:24.754 [2024-11-20 15:29:11.163358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:21:24.754 [2024-11-20 15:29:11.173433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196020) with pdu=0x200016ef4298
00:21:24.754 [2024-11-20 15:29:11.174640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:24.754 [2024-11-20 15:29:11.174668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:21:24.754 28103.00 IOPS, 109.78 MiB/s
00:21:24.754 Latency(us)
00:21:24.754 [2024-11-20T15:29:11.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:24.754 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:24.754 nvme0n1 : 2.00 28134.22 109.90 0.00 0.00 4545.63 1842.38 13896.79
00:21:24.754 [2024-11-20T15:29:11.236Z] ===================================================================================================================
00:21:24.754 [2024-11-20T15:29:11.236Z] Total : 28134.22 109.90 0.00 0.00 4545.63 1842.38 13896.79
00:21:24.754 {
00:21:24.754 "results": [
00:21:24.754 {
00:21:24.754 "job": "nvme0n1",
00:21:24.754 "core_mask": "0x2",
00:21:24.754 "workload": "randwrite",
00:21:24.754 "status": "finished",
00:21:24.754 "queue_depth": 128,
00:21:24.754 "io_size": 4096,
00:21:24.754 "runtime": 2.00233,
00:21:24.754 "iops": 28134.223629471664,
00:21:24.754 "mibps": 109.89931105262369,
00:21:24.754 "io_failed": 0,
00:21:24.754 "io_timeout": 0,
00:21:24.754 "avg_latency_us": 4545.633801610389,
00:21:24.754 "min_latency_us": 1842.3775100401606,
00:21:24.754 "max_latency_us": 13896.790361445783
00:21:24.754 }
00:21:24.754 ],
00:21:24.754 "core_count": 1
00:21:24.754 }
00:21:24.754 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:24.754 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:24.754 | .driver_specific
00:21:24.754 | .nvme_error
00:21:24.754 | .status_code
00:21:24.754 | .command_transient_transport_error'
00:21:24.754 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:24.754 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:25.014 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 ))
00:21:25.014 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95431
00:21:25.015 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95431 ']'
00:21:25.015 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95431
00:21:25.015 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:21:25.015 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:25.015 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95431
00:21:25.015 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:25.015 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
killing process with pid 95431
15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95431'
15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95431
00:21:25.015 Received shutdown signal, test time was about 2.000000 seconds
00:21:25.015
00:21:25.015 Latency(us)
[2024-11-20T15:29:11.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-20T15:29:11.497Z] ===================================================================================================================
[2024-11-20T15:29:11.497Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:25.015 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95431
00:21:25.274 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:21:25.274 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:25.274 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:21:25.274 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:21:25.275 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:21:25.275 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95517
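The second error-injection pass being set up here boils down to a handful of RPCs against the bperf socket, as the trace below shows. A minimal standalone sketch of the same flow in shell, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk with bdevperf built, the NVMe-oF/TCP target from earlier in this run still listening at 10.0.0.3:4420, and jq installed; the rpc() helper and the socket-poll loop are local shorthands, the latter only a crude stand-in for the harness's waitforlisten:

  #!/usr/bin/env bash
  # Sketch of the digest-error flow driven by host/digest.sh: start bdevperf
  # idle (-z), corrupt every 32nd crc32c so outgoing TCP data digests are
  # wrong, run the workload, then read back how many WRITEs completed with
  # TRANSIENT TRANSPORT ERROR (00/22) from the per-bdev NVMe error stats.
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock
  rpc() { "$spdk/scripts/rpc.py" -s "$sock" "$@"; }

  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  while [[ ! -S $sock ]]; do sleep 0.2; done    # crude stand-in for waitforlisten

  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc accel_error_inject_error -o crc32c -t disable    # connect with clean digests
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32    # now corrupt 1 in 32
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

  errs=$(rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
      | .driver_specific | .nvme_error | .status_code
      | .command_transient_transport_error')
  (( errs > 0 )) && echo "counted $errs transient transport errors"
  kill "$bperfpid"

Each corrupted digest then surfaces as the data_crc32_calc_done *ERROR* / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pair that repeats through the surrounding output, and because --bdev-retry-count -1 keeps retrying those I/Os, io_failed stays 0 in the results above while the per-bdev transient-error counter (220 in the pass just finished) is what the harness's (( 220 > 0 )) check asserts on.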
00:21:25.275 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95517 /var/tmp/bperf.sock
00:21:25.275 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:21:25.275 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95517 ']'
00:21:25.275 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:25.275 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:25.275 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:25.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:25.275 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:25.534 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:25.534 Zero copy mechanism will not be used.
00:21:25.534 [2024-11-20 15:29:11.805644] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:21:25.534 [2024-11-20 15:29:11.805749] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95517 ]
00:21:25.534 [2024-11-20 15:29:11.959690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:25.794 [2024-11-20 15:29:12.022539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:26.362 15:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:26.362 15:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:21:26.362 15:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:26.362 15:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:26.622 15:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:26.622 15:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:26.622 15:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:26.622 15:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:26.622 15:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:26.622 15:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:26.883 nvme0n1
00:21:26.883 15:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:21:26.883 15:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:26.883 15:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:26.883 15:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:26.883 15:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:26.883 15:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:26.883 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:26.883 Zero copy mechanism will not be used.
00:21:26.883 Running I/O for 2 seconds...
00:21:26.883 [2024-11-20 15:29:13.311498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8
00:21:26.883 [2024-11-20 15:29:13.311673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.883 [2024-11-20 15:29:13.311716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:21:26.883 [2024-11-20 15:29:13.316367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8
00:21:26.883 [2024-11-20 15:29:13.316511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.883 [2024-11-20 15:29:13.316536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:21:26.883 [2024-11-20 15:29:13.320997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8
00:21:26.883 [2024-11-20 15:29:13.321174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.883 [2024-11-20 15:29:13.321204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:21:26.883 [2024-11-20 15:29:13.325280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8
00:21:26.883 [2024-11-20 15:29:13.325449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.883 [2024-11-20 15:29:13.325477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:21:26.883 [2024-11-20 15:29:13.329517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8
00:21:26.883 [2024-11-20 15:29:13.329715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.883 [2024-11-20 15:29:13.329736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:21:26.883 [2024-11-20 15:29:13.333698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8
00:21:26.883 [2024-11-20 15:29:13.333893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.883 [2024-11-20 15:29:13.333914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:21:26.883 [2024-11-20 15:29:13.337929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8
00:21:26.883 [2024-11-20 15:29:13.338110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.883 [2024-11-20 15:29:13.338130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same data_crc32_calc_done data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triple repeats for the remaining queued WRITEs (cid 0/1/2, varying lba and sqhd) from 15:29:13.342 through 15:29:13.931, elapsed 00:21:26.883 to 00:21:27.672 ...]
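For context on the repeating failure above: data_crc32_calc_done reports a mismatch between the CRC32C data digest (DDGST) carried by an NVMe/TCP data PDU and the digest recomputed over the received payload, and each affected WRITE is completed with a transient transport error so the host may retry. The snippet below is a minimal illustrative sketch of that kind of check, not SPDK's actual implementation; the crc32c() helper and the data_pdu_view struct are stand-ins for SPDK's own CRC32C helpers and internal PDU types.

    /* Self-contained sketch of an NVMe/TCP data-digest check (illustrative
     * only). CRC32C is CRC-32 with the Castagnoli polynomial, reflected
     * form 0x82F63B78, all-ones initial value and final XOR. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;                 /* initial remainder */
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
            }
        }
        return crc ^ 0xFFFFFFFFu;                   /* final XOR */
    }

    /* Hypothetical PDU view: the payload plus the 4-byte DDGST that
     * trailed it on the wire. */
    struct data_pdu_view {
        const uint8_t *payload;
        size_t         payload_len;
        uint32_t       recv_ddgst;
    };

    /* A false return here corresponds to the "Data digest error on
     * tqpair=..." lines in the log, which the test expects, since it
     * injects corrupted digests on purpose. */
    static bool data_digest_ok(const struct data_pdu_view *pdu)
    {
        return crc32c(pdu->payload, pdu->payload_len) == pdu->recv_ddgst;
    }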
00:21:27.672 [2024-11-20 15:29:13.935498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8
00:21:27.672 [2024-11-20 15:29:13.935689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:27.672 [2024-11-20 15:29:13.935709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
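The "(00/22)" in each completion is the NVMe status pair status-code-type/status-code: SCT 0x0 (generic) with SC 0x22, Command Transient Transport Error, and dnr:0 (Do Not Retry clear) tells the host the command may be resubmitted. Below is a hedged sketch of unpacking those fields from the 16-bit status halfword in completion dword 3 (field layout per the NVMe base specification; the cqe_status helper type is made up for illustration).

    #include <stdint.h>
    #include <stdio.h>

    /* Status halfword (CQE dword 3, bits 31:16), per the NVMe base spec:
     *   [15] DNR  [14] More  [13:12] CRD  [11:9] SCT  [8:1] SC  [0] Phase */
    struct cqe_status {                      /* hypothetical helper type */
        uint8_t sct, sc, dnr, more, phase;
    };

    static struct cqe_status decode_status(uint16_t raw)
    {
        struct cqe_status s = {
            .phase = raw & 0x1,
            .sc    = (raw >> 1) & 0xFF,
            .sct   = (raw >> 9) & 0x7,
            .more  = (raw >> 14) & 0x1,
            .dnr   = (raw >> 15) & 0x1,
        };
        return s;
    }

    int main(void)
    {
        /* SCT 0x0 / SC 0x22 is the "COMMAND TRANSIENT TRANSPORT ERROR
         * (00/22)" printed throughout the log above. */
        struct cqe_status s = decode_status((uint16_t)(0x22u << 1));
        printf("sct:%#x sc:%#x dnr:%u -> %s\n", s.sct, s.sc, s.dnr,
               s.dnr ? "do not retry" : "host may retry");
        return 0;
    }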
00:21:27.672 [2024-11-20 15:29:13.939664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8
00:21:27.672 [2024-11-20 15:29:13.939853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:27.672 [2024-11-20 15:29:13.939873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:21:27.672 [2024-11-20 15:29:13.943907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8
00:21:27.672 [2024-11-20
15:29:13.944086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.672 [2024-11-20 15:29:13.944105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.672 [2024-11-20 15:29:13.948088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.672 [2024-11-20 15:29:13.948257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.672 [2024-11-20 15:29:13.948277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.672 [2024-11-20 15:29:13.952259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.672 [2024-11-20 15:29:13.952435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.672 [2024-11-20 15:29:13.952454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.672 [2024-11-20 15:29:13.956414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.672 [2024-11-20 15:29:13.956577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.672 [2024-11-20 15:29:13.956596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.672 [2024-11-20 15:29:13.960560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.672 [2024-11-20 15:29:13.960751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.672 [2024-11-20 15:29:13.960770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.672 [2024-11-20 15:29:13.964744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.672 [2024-11-20 15:29:13.964926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.672 [2024-11-20 15:29:13.964945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.672 [2024-11-20 15:29:13.968895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:13.969072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:13.969091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:13.973046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 
00:21:27.673 [2024-11-20 15:29:13.973223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:13.973243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:13.977183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:13.977337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:13.977356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:13.981367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:13.981547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:13.981566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:13.985563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:13.985770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:13.985790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:13.989767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:13.989961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:13.989980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:13.993957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:13.994168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:13.994187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:13.998113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:13.998322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:13.998341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.002327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.002485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.002505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.006534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.006745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.006765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.010773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.010955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.010974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.014926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.015088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.015108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.019145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.019327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.019347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.023383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.023573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.023593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.027542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.027722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.027758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.031782] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.031962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.031980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.035937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.036099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.036118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.040092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.040268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.040288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.044176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.044353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.044372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.048365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.048502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.048522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.052577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.052762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.052782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.056707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.056885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.056904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.060952] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.061132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.061151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.065144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.065305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.065324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.069296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.069467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.069486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.073523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.073739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.073758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.077655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.673 [2024-11-20 15:29:14.077849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.673 [2024-11-20 15:29:14.077869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.673 [2024-11-20 15:29:14.081825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.082021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.082040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.085970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.086162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.086181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.674 
[2024-11-20 15:29:14.090137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.090285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.090313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.094293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.094494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.094513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.098515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.098680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.098712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.102756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.102937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.102956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.106866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.107055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.107076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.111070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.111250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.111269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.115209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.115407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.115426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.119488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.119658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.119677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.123776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.123965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.123984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.128008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.128199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.128218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.132246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.132421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.132439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.136408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.136587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.136606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.140596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.140787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.140806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.144730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.144917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.144936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.674 [2024-11-20 15:29:14.148887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.674 [2024-11-20 15:29:14.149062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.674 [2024-11-20 15:29:14.149081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.152998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.153134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.153152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.157186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.157368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.157386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.161388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.161577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.161596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.165560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.165762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.165781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.169801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.169991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.170010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.174044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.174194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.174214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.178226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.178428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.178447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.182455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.182637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.182656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.186632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.186790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.186809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.190692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.190847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.190866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.194903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.195079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.195098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.199168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.199306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.199326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.203466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.203621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.203640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.207652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.207834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.207853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.211821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.211970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.211990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.216072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.216243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.216262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.220263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.220437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.220456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.224498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.224667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.224697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.228669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.228855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 15:29:14.228874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.232881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.233067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.936 [2024-11-20 
15:29:14.233086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.936 [2024-11-20 15:29:14.237033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.936 [2024-11-20 15:29:14.237199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.237217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.241216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.241406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.241425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.245311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.245496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.245514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.249450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.249626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.249644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.253610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.253811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.253831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.257796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.257986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.258004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.261946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.262136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:27.937 [2024-11-20 15:29:14.262155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.266133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.266334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.266353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.270359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.270535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.270554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.274524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.274707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.274726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.278648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.278841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.278860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.282797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.282965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.282984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.286917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.287114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.287133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.291174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.291346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.291366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.295442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.295611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.295630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.299618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.299805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.299824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.937 7314.00 IOPS, 914.25 MiB/s [2024-11-20T15:29:14.419Z] [2024-11-20 15:29:14.304908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.305143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.305163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.309056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.309235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.309255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.313188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.313381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.313401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.317351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.317517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.317536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.321504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.321684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.321720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.325604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.325810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.325830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.329762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.329958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.329977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.333936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.334132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.334151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.338081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.338267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.338286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.342233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.342417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.342436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.346351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 15:29:14.346505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.937 [2024-11-20 15:29:14.346524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.937 [2024-11-20 15:29:14.350526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.937 [2024-11-20 
15:29:14.350721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.938 [2024-11-20 15:29:14.350740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.938 [2024-11-20 15:29:14.354724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.938 [2024-11-20 15:29:14.354921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.938 [2024-11-20 15:29:14.354940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.938 [2024-11-20 15:29:14.358900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.938 [2024-11-20 15:29:14.359062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.938 [2024-11-20 15:29:14.359082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.938 [2024-11-20 15:29:14.363053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.938 [2024-11-20 15:29:14.363332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.938 [2024-11-20 15:29:14.363350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:27.938 [2024-11-20 15:29:14.367172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.938 [2024-11-20 15:29:14.367442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.938 [2024-11-20 15:29:14.367460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:27.938 [2024-11-20 15:29:14.371339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.938 [2024-11-20 15:29:14.371533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.938 [2024-11-20 15:29:14.371552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:27.938 [2024-11-20 15:29:14.375504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:27.938 [2024-11-20 15:29:14.375712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.938 [2024-11-20 15:29:14.375746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:27.938 [2024-11-20 15:29:14.379739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 
[... near-identical triplets elided: from 15:29:14.379930 through 15:29:14.920639 (elapsed 00:21:27.938 - 00:21:28.465) the log repeats, roughly every 4 ms, a tcp.c:2233:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8" followed by the failing WRITE command (sqid:1, nsid:1, len:32, varying lba and cid 0-3) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:21:28.465 [2024-11-20 15:29:14.924583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8
00:21:28.465 [2024-11-20 15:29:14.924774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:28.465 [2024-11-20 15:29:14.924793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:21:28.465 [2024-11-20 15:29:14.928743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8
00:21:28.465 [2024-11-20 15:29:14.928936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:28.465 [2024-11-20 15:29:14.928954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:21:28.465 [2024-11-20 15:29:14.932928]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.465 [2024-11-20 15:29:14.933099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.465 [2024-11-20 15:29:14.933118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.465 [2024-11-20 15:29:14.937089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.465 [2024-11-20 15:29:14.937287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.465 [2024-11-20 15:29:14.937307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.465 [2024-11-20 15:29:14.941302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.465 [2024-11-20 15:29:14.941468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.466 [2024-11-20 15:29:14.941487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.945463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.945655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.945684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.949563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.949765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.949786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.953858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.954050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.954068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.958011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.958186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.958204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.780 
[2024-11-20 15:29:14.962214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.962417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.962436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.966369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.966560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.966579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.970605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.970826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.970845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.974718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.974928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.974947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.978907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.979078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.979097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.983112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.983272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.983291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.987212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.987394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.987413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.991410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.991618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.991637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.995629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.995818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:14.995837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:14.999826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:14.999983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:15.000002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.780 [2024-11-20 15:29:15.003994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.780 [2024-11-20 15:29:15.004163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.780 [2024-11-20 15:29:15.004182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.008178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.008365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.008384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.012350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.012548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.012567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.016545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.016756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.016775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.020704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.020877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.020896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.024856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.025031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.025050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.028992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.029167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.029185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.033165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.033342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.033360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.037297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.037504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.037523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.041381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.041547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.041566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.045568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.045760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.045778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.049767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.049941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.049959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.053923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.054100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.054120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.058076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.058250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.058269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.062251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.062448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.062468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.066422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.066615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.066634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.070661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.070856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.070876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.074869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.075053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.075071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.079037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.079195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.079214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.083246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.083426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.083444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.087446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.087605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.087624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.091638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.091830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.091850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.095886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.096065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.096084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.100117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.100291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.100309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.104331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.104508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 
15:29:15.104526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.108511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.108692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.108711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.112738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.112912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.112931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.116895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.781 [2024-11-20 15:29:15.117082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.781 [2024-11-20 15:29:15.117101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.781 [2024-11-20 15:29:15.121077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.121250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.121269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.125258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.125432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.125450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.129434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.129602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.129621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.133582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.133763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:28.782 [2024-11-20 15:29:15.133782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.137810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.137981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.137999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.141927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.142100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.142118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.146146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.146350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.146370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.150266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.150449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.150468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.154451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.154629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.154648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.158654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.158847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.158866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.162876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.163044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.163063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.167033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.167199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.167218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.171203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.171401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.171420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.175423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.175583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.175602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.179707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.179927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.179946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.183937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.184113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.184131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.188123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.188301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.188320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.192309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.192481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.192500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.196517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.196705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.196740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.200662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.200847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.200865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.204802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.204979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.204998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.209015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.209202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.209221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.213236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.213412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.213430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.217440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.217615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.217634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.221595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.221802] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.221821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.225794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.225987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.226006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.230007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.782 [2024-11-20 15:29:15.230174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.782 [2024-11-20 15:29:15.230193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.782 [2024-11-20 15:29:15.234193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.783 [2024-11-20 15:29:15.234402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.783 [2024-11-20 15:29:15.234421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.783 [2024-11-20 15:29:15.238458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.783 [2024-11-20 15:29:15.238620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.783 [2024-11-20 15:29:15.238639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:28.783 [2024-11-20 15:29:15.242712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.783 [2024-11-20 15:29:15.242899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.783 [2024-11-20 15:29:15.242918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:28.783 [2024-11-20 15:29:15.246955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.783 [2024-11-20 15:29:15.247181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.783 [2024-11-20 15:29:15.247200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.783 [2024-11-20 15:29:15.251309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.783 [2024-11-20 15:29:15.251483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.783 [2024-11-20 15:29:15.251503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:28.783 [2024-11-20 15:29:15.255554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:28.783 [2024-11-20 15:29:15.255755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.783 [2024-11-20 15:29:15.255775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:29.043 [2024-11-20 15:29:15.259901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:29.043 [2024-11-20 15:29:15.260080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.043 [2024-11-20 15:29:15.260098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:29.043 [2024-11-20 15:29:15.264086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:29.043 [2024-11-20 15:29:15.264265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.043 [2024-11-20 15:29:15.264284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:29.043 [2024-11-20 15:29:15.268272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:29.043 [2024-11-20 15:29:15.268430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.043 [2024-11-20 15:29:15.268449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:29.043 [2024-11-20 15:29:15.272474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:29.043 [2024-11-20 15:29:15.272651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.043 [2024-11-20 15:29:15.272670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:29.043 [2024-11-20 15:29:15.276723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:29.043 [2024-11-20 15:29:15.276930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.043 [2024-11-20 15:29:15.276949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:29.043 [2024-11-20 15:29:15.280891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:29.043 [2024-11-20 
15:29:15.281069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.043 [2024-11-20 15:29:15.281087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:29.043 [2024-11-20 15:29:15.284999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:29.043 [2024-11-20 15:29:15.285174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.043 [2024-11-20 15:29:15.285193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:29.043 [2024-11-20 15:29:15.289141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:29.043 [2024-11-20 15:29:15.289308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.043 [2024-11-20 15:29:15.289327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:29.043 [2024-11-20 15:29:15.293327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:29.043 [2024-11-20 15:29:15.293491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.043 [2024-11-20 15:29:15.293510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:29.043 [2024-11-20 15:29:15.297578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:29.043 [2024-11-20 15:29:15.297785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.043 [2024-11-20 15:29:15.297804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:29.043 7347.00 IOPS, 918.38 MiB/s [2024-11-20T15:29:15.525Z] [2024-11-20 15:29:15.303032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1196210) with pdu=0x200016eff3c8 00:21:29.043 [2024-11-20 15:29:15.303226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.043 [2024-11-20 15:29:15.303245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:29.043 00:21:29.043 Latency(us) 00:21:29.043 [2024-11-20T15:29:15.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.043 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:29.043 nvme0n1 : 2.00 7343.36 917.92 0.00 0.00 2175.32 1829.22 9843.56 00:21:29.043 [2024-11-20T15:29:15.525Z] =================================================================================================================== 00:21:29.043 [2024-11-20T15:29:15.525Z] Total : 7343.36 917.92 0.00 0.00 2175.32 1829.22 9843.56 00:21:29.043 { 00:21:29.043 
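
As a cross-check on the table above, the throughput columns can be re-derived from the JSON block that bdevperf prints next: MiB/s = iops * io_size / 2^20, here 7343.36 * 131072 / 1048576 = 917.92. A minimal bash sketch (results.json is a hypothetical capture of the JSON below; the jq paths follow its structure):

    # Recompute MiB/s and the total I/O count from the bdevperf JSON results.
    # results.json is an assumed capture of the block printed below.
    iops=$(jq -r '.results[0].iops' results.json)        # 7343.360773...
    io_size=$(jq -r '.results[0].io_size' results.json)  # 131072 bytes (128 KiB)
    runtime=$(jq -r '.results[0].runtime' results.json)  # 2.00317 s
    awk -v i="$iops" -v s="$io_size" -v r="$runtime" 'BEGIN {
        printf "%.2f MiB/s, %.0f I/Os completed\n", i * s / 1048576, i * r
    }'
    # -> 917.92 MiB/s, 14710 I/Os completed
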
"results": [ 00:21:29.043 { 00:21:29.043 "job": "nvme0n1", 00:21:29.043 "core_mask": "0x2", 00:21:29.043 "workload": "randwrite", 00:21:29.043 "status": "finished", 00:21:29.043 "queue_depth": 16, 00:21:29.043 "io_size": 131072, 00:21:29.043 "runtime": 2.00317, 00:21:29.043 "iops": 7343.360773174519, 00:21:29.043 "mibps": 917.9200966468148, 00:21:29.043 "io_failed": 0, 00:21:29.043 "io_timeout": 0, 00:21:29.043 "avg_latency_us": 2175.320998473841, 00:21:29.043 "min_latency_us": 1829.2176706827308, 00:21:29.043 "max_latency_us": 9843.55983935743 00:21:29.043 } 00:21:29.043 ], 00:21:29.043 "core_count": 1 00:21:29.043 } 00:21:29.043 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:29.043 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:29.043 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:29.043 | .driver_specific 00:21:29.043 | .nvme_error 00:21:29.043 | .status_code 00:21:29.043 | .command_transient_transport_error' 00:21:29.043 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:29.303 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 475 > 0 )) 00:21:29.303 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95517 00:21:29.303 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95517 ']' 00:21:29.303 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95517 00:21:29.303 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:29.303 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.303 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95517 00:21:29.303 killing process with pid 95517 00:21:29.303 Received shutdown signal, test time was about 2.000000 seconds 00:21:29.303 00:21:29.303 Latency(us) 00:21:29.303 [2024-11-20T15:29:15.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.303 [2024-11-20T15:29:15.785Z] =================================================================================================================== 00:21:29.303 [2024-11-20T15:29:15.785Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.303 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:29.303 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:29.303 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95517' 00:21:29.303 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95517 00:21:29.303 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95517 00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95203 00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95203
00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95203 ']'
00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95203
00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95203
00:21:29.563 killing process with pid 95203
00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95203'
00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95203
00:21:29.563 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95203
00:21:29.823
00:21:29.823 real 0m18.074s
00:21:29.823 user 0m33.190s
00:21:29.823 sys 0m5.522s
00:21:29.823 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:29.823 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:29.823 ************************************
00:21:29.823 END TEST nvmf_digest_error
00:21:29.823 ************************************
00:21:29.823 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:21:29.823 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:21:29.823 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:29.823 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:30.083 rmmod nvme_tcp
00:21:30.083 rmmod nvme_fabrics
00:21:30.083 rmmod nvme_keyring
00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 95203 ']'
00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 95203
00:21:30.083 Process with pid 95203 is not found
00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 95203 ']'
00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 95203
00:21:30.083 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (95203) - No such process
00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 95203 is not found'
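
The killprocess sequence traced at common/autotest_common.sh@954-@981 (once for the bperf pid 95517 above and twice for the target pid 95203) follows one fixed pattern. A rough bash reconstruction from the xtrace; the line references follow the @NNN markers in the log, and the handling of the sudo branch is an assumption:

    # Rough reconstruction of the traced killprocess() flow; not the verbatim
    # SPDK helper, just what the xtrace above implies.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                                # @954: require a pid
        if kill -0 "$pid"; then                                  # @958: process alive?
            if [ "$(uname)" = Linux ]; then                      # @959
                process_name=$(ps --no-headers -o comm= "$pid")  # @960
            fi
            if [ "$process_name" = sudo ]; then                  # @964
                # assumed: re-target the wrapped child when run under sudo
                pid=$(ps --no-headers -o pid= --ppid "$pid")
            fi
            echo "killing process with pid $pid"                 # @972
            kill "$pid"                                          # @973
            wait "$pid"                                          # @978: reap it
        else
            echo "Process with pid $pid is not found"            # @981
        fi
    }
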
with pid 95203 is not found' 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:30.083 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:30.343 00:21:30.343 real 0m38.126s 00:21:30.343 user 1m8.118s 00:21:30.343 sys 0m11.484s 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:30.343 ************************************ 00:21:30.343 END TEST nvmf_digest 00:21:30.343 ************************************ 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- 
# [[ tcp == \t\c\p ]] 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.343 ************************************ 00:21:30.343 START TEST nvmf_mdns_discovery 00:21:30.343 ************************************ 00:21:30.343 15:29:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:21:30.603 * Looking for test storage... 00:21:30.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:30.603 15:29:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:30.603 15:29:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:21:30.603 15:29:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:30.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.603 --rc genhtml_branch_coverage=1 00:21:30.603 --rc genhtml_function_coverage=1 00:21:30.603 --rc genhtml_legend=1 00:21:30.603 --rc geninfo_all_blocks=1 00:21:30.603 --rc geninfo_unexecuted_blocks=1 00:21:30.603 00:21:30.603 ' 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:30.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.603 --rc genhtml_branch_coverage=1 00:21:30.603 --rc genhtml_function_coverage=1 00:21:30.603 --rc genhtml_legend=1 00:21:30.603 --rc geninfo_all_blocks=1 00:21:30.603 --rc geninfo_unexecuted_blocks=1 00:21:30.603 00:21:30.603 ' 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:30.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.603 --rc genhtml_branch_coverage=1 00:21:30.603 --rc genhtml_function_coverage=1 00:21:30.603 --rc genhtml_legend=1 00:21:30.603 --rc geninfo_all_blocks=1 00:21:30.603 --rc geninfo_unexecuted_blocks=1 00:21:30.603 00:21:30.603 ' 00:21:30.603 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:30.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.604 --rc genhtml_branch_coverage=1 00:21:30.604 --rc genhtml_function_coverage=1 00:21:30.604 --rc genhtml_legend=1 00:21:30.604 --rc geninfo_all_blocks=1 00:21:30.604 --rc geninfo_unexecuted_blocks=1 00:21:30.604 00:21:30.604 ' 00:21:30.604 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:30.604 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:30.604 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.604 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.604 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.604 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.604 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.604 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.604 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.604 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.604 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.604 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:21:30.863 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:30.864 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:30.864 Cannot find device "nvmf_init_br" 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:30.864 Cannot find device "nvmf_init_br2" 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:30.864 Cannot find device "nvmf_tgt_br" 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:30.864 Cannot find device "nvmf_tgt_br2" 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:30.864 Cannot find device "nvmf_init_br" 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:30.864 Cannot find device "nvmf_init_br2" 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:30.864 Cannot find device "nvmf_tgt_br" 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:30.864 Cannot find device "nvmf_tgt_br2" 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:30.864 Cannot find device "nvmf_br" 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:30.864 Cannot find device "nvmf_init_if" 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:30.864 Cannot find device "nvmf_init_if2" 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:21:30.864 15:29:17 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:30.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:30.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:30.864 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
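For reference, the nvmf_veth_init sequence replayed above reduces to a plain iproute2 recipe. This is a minimal sketch using the interface names and 10.0.0.0/24 addresses from this log (the log's next step, enslaving the *_br ends to nvmf_br, completes the topology):

  # target namespace plus four veth pairs: two initiator-side, two target-side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator addresses on the host, target addresses inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring every end up, including lo inside the namespace
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge that the host-side veth ends are attached to next
  ip link add nvmf_br type bridge
  ip link set nvmf_br up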
00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:31.123 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:31.123 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:21:31.123 00:21:31.123 --- 10.0.0.3 ping statistics --- 00:21:31.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.123 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:21:31.123 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:31.123 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:31.123 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:21:31.123 00:21:31.123 --- 10.0.0.4 ping statistics --- 00:21:31.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.123 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:31.382 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:31.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:21:31.383 00:21:31.383 --- 10.0.0.1 ping statistics --- 00:21:31.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.383 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:31.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:31.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:21:31.383 00:21:31.383 --- 10.0.0.2 ping statistics --- 00:21:31.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.383 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=95872 00:21:31.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 95872 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95872 ']' 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.383 15:29:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.383 [2024-11-20 15:29:17.729224] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
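The ipts wrapper invoked above tags every rule it inserts with an SPDK_NVMF comment, which is what lets the iptr teardown (seen at the end of the digest test earlier) drop all test rules in one pass instead of tracking them individually. A sketch of the pair as it behaves in this log:

  # insert rules tagged with a comment naming the original arguments
  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # teardown: rewrite the ruleset with every tagged rule filtered out
  iptables-save | grep -v SPDK_NVMF | iptables-restore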
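The nvmfappstart/waitforlisten pattern above boils down to launching nvmf_tgt inside the namespace and blocking until its RPC socket answers before any rpc_cmd is issued; --wait-for-rpc additionally defers subsystem initialization until framework_start_init is called, as it is later in this test. A hedged equivalent of the waitforlisten step, using rpc_get_methods as a cheap liveness probe (the harness's actual polling logic lives in autotest_common.sh):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  nvmfpid=$!
  # poll the default RPC socket until the app is ready to serve requests
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
      sleep 0.5
  done
  # finish startup once any pre-init configuration (e.g. nvmf_set_config) is in place
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init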
00:21:31.383 [2024-11-20 15:29:17.729294] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.642 [2024-11-20 15:29:17.880337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.642 [2024-11-20 15:29:17.938746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.642 [2024-11-20 15:29:17.938987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.642 [2024-11-20 15:29:17.939004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.642 [2024-11-20 15:29:17.939013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.642 [2024-11-20 15:29:17.939020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:31.642 [2024-11-20 15:29:17.939393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.210 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.470 [2024-11-20 15:29:18.822530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.470 [2024-11-20 15:29:18.834685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.470 null0 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.470 null1 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.470 null2 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.470 null3 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=95928 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 95928 /tmp/host.sock 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95928 ']' 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local 
rpc_addr=/tmp/host.sock 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:32.470 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.470 15:29:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.729 [2024-11-20 15:29:18.955088] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:21:32.729 [2024-11-20 15:29:18.955283] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95928 ] 00:21:32.729 [2024-11-20 15:29:19.105215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.729 [2024-11-20 15:29:19.176348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.665 15:29:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.665 15:29:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:21:33.665 15:29:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:21:33.665 15:29:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:21:33.665 15:29:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:21:33.665 15:29:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=95957 00:21:33.665 15:29:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:21:33.665 15:29:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:21:33.665 15:29:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:21:33.665 Process 1070 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:21:33.665 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:21:33.665 Successfully dropped root privileges. 00:21:33.665 avahi-daemon 0.8 starting up. 00:21:33.665 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:21:34.603 Successfully called chroot(). 00:21:34.603 Successfully dropped remaining capabilities. 00:21:34.603 No service file found in /etc/avahi/services. 00:21:34.603 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:21:34.603 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:21:34.603 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:21:34.603 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:21:34.603 Network interface enumeration completed. 00:21:34.603 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
00:21:34.603 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:21:34.603 Registering new address record for fe80::b06d:e1ff:fe63:97e4 on nvmf_tgt_if.*. 00:21:34.603 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:21:34.603 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2347711815. 00:21:34.603 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:34.603 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.603 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.603 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.603 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:34.603 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.603 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.603 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.603 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:21:34.603 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:21:34.603 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:34.604 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:34.604 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.604 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.604 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:34.604 15:29:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:34.604 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.604 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:21:34.604 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:21:34.604 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.604 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:34.604 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:34.604 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.604 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.604 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:34.604 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:21:34.863 15:29:21 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 
-- # set +x 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.863 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.864 [2024-11-20 15:29:21.289453] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.864 [2024-11-20 15:29:21.327105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.864 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x
00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr
00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:35.123 15:29:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5
00:21:36.060 [2024-11-20 15:29:22.188003] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW
00:21:36.320 [2024-11-20 15:29:22.587361] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:21:36.320 [2024-11-20 15:29:22.587387] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:21:36.320 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:36.320 cookie is 0
00:21:36.320 is_local: 1
00:21:36.320 our_own: 0
00:21:36.320 wide_area: 0
00:21:36.320 multicast: 1
00:21:36.320 cached: 1
00:21:36.320 [2024-11-20 15:29:22.687193] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:21:36.320 [2024-11-20 15:29:22.687211] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:21:36.320 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:36.320 cookie is 0
00:21:36.320 is_local: 1
00:21:36.320 our_own: 0
00:21:36.320 wide_area: 0
00:21:36.320 multicast: 1
00:21:36.320 cached: 1
00:21:37.258 [2024-11-20 15:29:23.586782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:37.258 [2024-11-20 15:29:23.586843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1414710 with addr=10.0.0.4, port=8009
00:21:37.258 [2024-11-20 15:29:23.586902] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:21:37.258 [2024-11-20 15:29:23.586917] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:21:37.258 [2024-11-20 15:29:23.586927] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:21:37.258 [2024-11-20 15:29:23.690625] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:21:37.258 [2024-11-20 15:29:23.690650] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:21:37.258 [2024-11-20 15:29:23.690665] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:21:37.517 [2024-11-20 15:29:23.777601] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0
00:21:37.517 [2024-11-20 15:29:23.831909] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420
00:21:37.517 [2024-11-20 15:29:23.832850] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1449e90:1 started.
00:21:37.517 [2024-11-20 15:29:23.834882] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:21:37.517 [2024-11-20 15:29:23.834903] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:21:37.517 [2024-11-20 15:29:23.838989] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1449e90 was disconnected and freed. delete nvme_qpair.
00:21:38.453 [2024-11-20 15:29:24.584963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:38.454 [2024-11-20 15:29:24.585133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1414350 with addr=10.0.0.4, port=8009
00:21:38.454 [2024-11-20 15:29:24.585261] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:21:38.454 [2024-11-20 15:29:24.585293] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:21:38.454 [2024-11-20 15:29:24.585321] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:21:39.389 [2024-11-20 15:29:25.583393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.389 [2024-11-20 15:29:25.583579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1432aa0 with addr=10.0.0.4, port=8009
00:21:39.389 [2024-11-20 15:29:25.583750] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:21:39.389 [2024-11-20 15:29:25.583836] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:21:39.389 [2024-11-20 15:29:25.583852] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found'
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found'
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:21:39.955 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:21:39.955 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:39.955 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]]
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:39.955 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:40.224 [2024-11-20 15:29:26.436518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 ***
00:21:40.224 [2024-11-20 15:29:26.440489] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:21:40.224 [2024-11-20 15:29:26.440521] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:21:40.224 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:40.224 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
00:21:40.224 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:40.224 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:40.224 [2024-11-20 15:29:26.448385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 ***
00:21:40.224 [2024-11-20 15:29:26.449438] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:21:40.224 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:40.224 15:29:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1
00:21:40.224 [2024-11-20 15:29:26.580330] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:21:40.224 [2024-11-20 15:29:26.580361] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:21:40.224 [2024-11-20 15:29:26.586064] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached
00:21:40.224 [2024-11-20 15:29:26.586088] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected
00:21:40.224 [2024-11-20 15:29:26.586099] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:21:40.224 [2024-11-20 15:29:26.666055] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:21:40.224 [2024-11-20 15:29:26.672016] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0
00:21:40.511 [2024-11-20 15:29:26.726268] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420
00:21:40.511 [2024-11-20 15:29:26.726855] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x14468b0:1 started.
00:21:40.511 [2024-11-20 15:29:26.728365] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:21:40.511 [2024-11-20 15:29:26.728389] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:21:40.511 [2024-11-20 15:29:26.734656] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x14468b0 was disconnected and freed. delete nvme_qpair.
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:21:41.078 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:21:41.078 +;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:21:41.078 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:21:41.078 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:41.078 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:41.078 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:41.078 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]]
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:41.078 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]]
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]]
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]]
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]]
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]]
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:21:41.079 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]]
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]]
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0
00:21:41.338 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:21:41.339 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.339 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:41.339 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:21:41.339 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:21:41.339 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:21:41.339 [2024-11-20 15:29:27.778980] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:21:41.339 [2024-11-20 15:29:27.779009] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:21:41.339 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:41.339 cookie is 0
00:21:41.339 is_local: 1
00:21:41.339 our_own: 0
00:21:41.339 wide_area: 0
00:21:41.339 multicast: 1
00:21:41.339 cached: 1
00:21:41.339 [2024-11-20 15:29:27.779023] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009
00:21:41.339 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.339 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]]
00:21:41.339 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]]
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:41.598 [2024-11-20 15:29:27.878094] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x144f5f0:1 started.
00:21:41.598 [2024-11-20 15:29:27.883307] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x144f5f0 was disconnected and freed. delete nvme_qpair.
00:21:41.598 [2024-11-20 15:29:27.885523] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x14456d0:1 started.
00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.598 15:29:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:21:41.598 [2024-11-20 15:29:27.892875] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x14456d0 was disconnected and freed. delete nvme_qpair. 00:21:41.598 [2024-11-20 15:29:27.980116] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:41.598 [2024-11-20 15:29:27.980255] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:41.598 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:41.598 cookie is 0 00:21:41.598 is_local: 1 00:21:41.598 our_own: 0 00:21:41.598 wide_area: 0 00:21:41.598 multicast: 1 00:21:41.598 cached: 1 00:21:41.598 [2024-11-20 15:29:27.980273] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:21:42.533 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:21:42.533 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.534 15:29:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.792 15:29:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:21:42.792 15:29:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:21:42.792 15:29:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:21:42.792 15:29:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:21:42.792 15:29:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.792 15:29:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.792 [2024-11-20 15:29:29.021781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:42.792 [2024-11-20 15:29:29.022793] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:42.792 [2024-11-20 15:29:29.022822] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:42.792 [2024-11-20 15:29:29.022853] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:42.792 [2024-11-20 15:29:29.022864] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:42.792 15:29:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.792 15:29:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:21:42.792 15:29:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.792 15:29:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.793 [2024-11-20 15:29:29.033683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:21:42.793 [2024-11-20 15:29:29.034772] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:42.793 [2024-11-20 15:29:29.034809] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:42.793 15:29:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.793 15:29:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:21:42.793 [2024-11-20 15:29:29.168724] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:21:42.793 [2024-11-20 15:29:29.169151] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:21:42.793 [2024-11-20 15:29:29.227079] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:21:42.793 
[2024-11-20 15:29:29.227138] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:21:42.793 [2024-11-20 15:29:29.227148] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:42.793 [2024-11-20 15:29:29.227154] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:42.793 [2024-11-20 15:29:29.227170] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:42.793 [2024-11-20 15:29:29.227316] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:21:42.793 [2024-11-20 15:29:29.227340] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:21:42.793 [2024-11-20 15:29:29.227346] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:21:42.793 [2024-11-20 15:29:29.227351] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:42.793 [2024-11-20 15:29:29.227362] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:43.051 [2024-11-20 15:29:29.273708] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:43.051 [2024-11-20 15:29:29.273727] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:43.051 [2024-11-20 15:29:29.273759] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:21:43.051 [2024-11-20 15:29:29.273765] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:43.618 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:21:43.618 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.618 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:43.618 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.618 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:43.619 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.619 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:43.619 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.619 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:43.619 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:21:43.619 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.619 15:29:30 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.619 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.619 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:43.619 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:43.619 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.877 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.878 [2024-11-20 15:29:30.312751] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:43.878 [2024-11-20 15:29:30.312793] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:43.878 [2024-11-20 15:29:30.312825] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:43.878 [2024-11-20 15:29:30.312836] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:43.878 [2024-11-20 15:29:30.313089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.878 [2024-11-20 15:29:30.313119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.878 [2024-11-20 15:29:30.313132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.878 [2024-11-20 15:29:30.313141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.878 [2024-11-20 15:29:30.313152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.878 [2024-11-20 15:29:30.313161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.878 [2024-11-20 15:29:30.313170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.878 [2024-11-20 15:29:30.313179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.878 [2024-11-20 15:29:30.313188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14267f0 is same with the state(6) to be set 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.878 [2024-11-20 15:29:30.323020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14267f0 (9): Bad file descriptor 00:21:43.878 [2024-11-20 15:29:30.324776] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:43.878 [2024-11-20 15:29:30.324861] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.878 15:29:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:21:43.878 [2024-11-20 15:29:30.332382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.878 [2024-11-20 15:29:30.332415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.878 [2024-11-20 15:29:30.332426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.878 [2024-11-20 15:29:30.332435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.878 [2024-11-20 15:29:30.332445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.878 [2024-11-20 15:29:30.332455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.878 [2024-11-20 15:29:30.332465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.878 [2024-11-20 15:29:30.332474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.878 [2024-11-20 15:29:30.332483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14344c0 is same with the state(6) to be set 00:21:43.878 [2024-11-20 15:29:30.333024] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:43.878 [2024-11-20 15:29:30.333041] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:43.878 [2024-11-20 15:29:30.333050] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:43.878 [2024-11-20 15:29:30.333058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:43.878 [2024-11-20 15:29:30.333083] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:43.878 [2024-11-20 15:29:30.333167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.878 [2024-11-20 15:29:30.333182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14267f0 with addr=10.0.0.3, port=4420 00:21:43.878 [2024-11-20 15:29:30.333193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14267f0 is same with the state(6) to be set 00:21:43.878 [2024-11-20 15:29:30.333209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14267f0 (9): Bad file descriptor 00:21:43.878 [2024-11-20 15:29:30.333231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:43.878 [2024-11-20 15:29:30.333241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:43.878 [2024-11-20 15:29:30.333253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:43.878 [2024-11-20 15:29:30.333261] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:43.878 [2024-11-20 15:29:30.333267] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:43.878 [2024-11-20 15:29:30.333273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:43.878 [2024-11-20 15:29:30.342332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14344c0 (9): Bad file descriptor 00:21:43.878 [2024-11-20 15:29:30.343074] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:43.878 [2024-11-20 15:29:30.343089] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:43.878 [2024-11-20 15:29:30.343095] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:43.878 [2024-11-20 15:29:30.343100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:43.878 [2024-11-20 15:29:30.343123] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:43.878 [2024-11-20 15:29:30.343175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.878 [2024-11-20 15:29:30.343189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14267f0 with addr=10.0.0.3, port=4420 00:21:43.878 [2024-11-20 15:29:30.343199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14267f0 is same with the state(6) to be set 00:21:43.878 [2024-11-20 15:29:30.343211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14267f0 (9): Bad file descriptor 00:21:43.878 [2024-11-20 15:29:30.343224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:43.878 [2024-11-20 15:29:30.343233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:43.878 [2024-11-20 15:29:30.343242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:21:43.878 [2024-11-20 15:29:30.343250] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:43.878 [2024-11-20 15:29:30.343255] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:43.878 [2024-11-20 15:29:30.343260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:43.878 [2024-11-20 15:29:30.352321] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:43.878 [2024-11-20 15:29:30.352341] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:43.878 [2024-11-20 15:29:30.352347] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:43.878 [2024-11-20 15:29:30.352353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:43.878 [2024-11-20 15:29:30.352378] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:43.878 [2024-11-20 15:29:30.352423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.878 [2024-11-20 15:29:30.352436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14344c0 with addr=10.0.0.4, port=4420 00:21:43.878 [2024-11-20 15:29:30.352446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14344c0 is same with the state(6) to be set 00:21:43.878 [2024-11-20 15:29:30.352459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14344c0 (9): Bad file descriptor 00:21:43.878 [2024-11-20 15:29:30.352471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:43.878 [2024-11-20 15:29:30.352480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:43.878 [2024-11-20 15:29:30.352490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:43.878 [2024-11-20 15:29:30.352497] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:43.879 [2024-11-20 15:29:30.352503] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:43.879 [2024-11-20 15:29:30.352507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:43.879 [2024-11-20 15:29:30.353114] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:43.879 [2024-11-20 15:29:30.353129] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:43.879 [2024-11-20 15:29:30.353135] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:43.879 [2024-11-20 15:29:30.353140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:43.879 [2024-11-20 15:29:30.353155] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:43.879 [2024-11-20 15:29:30.353190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.879 [2024-11-20 15:29:30.353203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14267f0 with addr=10.0.0.3, port=4420 00:21:43.879 [2024-11-20 15:29:30.353211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14267f0 is same with the state(6) to be set 00:21:43.879 [2024-11-20 15:29:30.353223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14267f0 (9): Bad file descriptor 00:21:43.879 [2024-11-20 15:29:30.353242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:43.879 [2024-11-20 15:29:30.353251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:43.879 [2024-11-20 15:29:30.353260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:43.879 [2024-11-20 15:29:30.353267] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:43.879 [2024-11-20 15:29:30.353272] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:43.879 [2024-11-20 15:29:30.353277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:44.140 [2024-11-20 15:29:30.362369] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:44.141 [2024-11-20 15:29:30.362387] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:44.141 [2024-11-20 15:29:30.362392] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:44.141 [2024-11-20 15:29:30.362397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:44.141 [2024-11-20 15:29:30.362419] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:44.141 [2024-11-20 15:29:30.362456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.141 [2024-11-20 15:29:30.362469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14344c0 with addr=10.0.0.4, port=4420 00:21:44.141 [2024-11-20 15:29:30.362478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14344c0 is same with the state(6) to be set 00:21:44.141 [2024-11-20 15:29:30.362490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14344c0 (9): Bad file descriptor 00:21:44.141 [2024-11-20 15:29:30.362502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:44.141 [2024-11-20 15:29:30.362511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:44.141 [2024-11-20 15:29:30.362520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:44.141 [2024-11-20 15:29:30.362527] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:44.141 [2024-11-20 15:29:30.362532] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:44.141 [2024-11-20 15:29:30.362537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:44.141 [2024-11-20 15:29:30.363147] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:44.141 [2024-11-20 15:29:30.363163] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:44.141 [2024-11-20 15:29:30.363168] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:44.141 [2024-11-20 15:29:30.363173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:44.141 [2024-11-20 15:29:30.363189] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:44.141 [2024-11-20 15:29:30.363222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.141 [2024-11-20 15:29:30.363235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14267f0 with addr=10.0.0.3, port=4420 00:21:44.141 [2024-11-20 15:29:30.363244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14267f0 is same with the state(6) to be set 00:21:44.141 [2024-11-20 15:29:30.363256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14267f0 (9): Bad file descriptor 00:21:44.141 [2024-11-20 15:29:30.363268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:44.141 [2024-11-20 15:29:30.363276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:44.141 [2024-11-20 15:29:30.363286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:44.141 [2024-11-20 15:29:30.363293] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:44.141 [2024-11-20 15:29:30.363299] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:44.141 [2024-11-20 15:29:30.363303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:44.141 [2024-11-20 15:29:30.372412] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:44.141 [2024-11-20 15:29:30.372435] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:44.141 [2024-11-20 15:29:30.372440] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:44.141 [2024-11-20 15:29:30.372446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:44.141 [2024-11-20 15:29:30.372469] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
[log condensed: the same reset/reconnect cycle repeats here at roughly 10 ms intervals, alternating between nqn.2016-06.io.spdk:cnode20 (tqpair 0x14344c0, 10.0.0.4:4420) and nqn.2016-06.io.spdk:cnode0 (tqpair 0x14267f0, 10.0.0.3:4420): "Delete qpairs for reset." -> "Start disconnecting ctrlr." -> "Start reconnecting ctrlr." -> "connect() failed, errno = 111" -> "controller reinitialization failed" -> "Resetting controller failed.". The failed passes run from 15:29:30.372 through 15:29:30.453 (console time 00:21:44.141 to 00:21:44.144) with no variation other than the timestamps.]
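errno 111 in the posix_sock_create() entries above is ECONNREFUSED: the test has removed the port 4420 listeners, so every reconnect attempt against the old port is refused until the discovery poller re-attaches on 4421 (visible a few entries below). A minimal bash-only probe, reusing the address and ports from this log, shows the same distinction from the host side; the /dev/tcp redirection is a bash built-in, so nothing beyond the shell is assumed:

    # Old port: no listener, so the TCP handshake is refused outright.
    # This is exactly what posix_sock_create() reports as errno = 111.
    if (exec 3<>/dev/tcp/10.0.0.3/4420) 2>/dev/null; then
        echo "4420: open"
    else
        echo "4420: refused (no listener)"
    fi

    # New port: the subsystem re-listens on 4421, which is where the
    # discovery poller eventually reconnects the controllers.
    if (exec 3<>/dev/tcp/10.0.0.3/4421) 2>/dev/null; then
        echo "4421: open"
    fi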
00:21:44.144 [2024-11-20 15:29:30.455822] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:21:44.144 [2024-11-20 15:29:30.455850] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:44.144 [2024-11-20 15:29:30.455870] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:44.144 [2024-11-20 15:29:30.456805] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:21:44.144 [2024-11-20 15:29:30.456825] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:44.144 [2024-11-20 15:29:30.456837] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:44.144 [2024-11-20 15:29:30.541743] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:44.144 [2024-11-20 15:29:30.542731] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.079 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.338 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.338 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:21:45.338 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:21:45.338 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:21:45.338 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:45.338 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.338 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.338 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.338 15:29:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:21:45.338 [2024-11-20 15:29:31.674124] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.273 15:29:32 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.273 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.533 [2024-11-20 15:29:32.810138] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:21:46.533 2024/11/20 15:29:32 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:21:46.533 request: 00:21:46.533 { 00:21:46.533 "method": "bdev_nvme_start_mdns_discovery", 00:21:46.533 "params": { 00:21:46.533 "name": "mdns", 00:21:46.533 "svcname": "_nvme-disc._http", 00:21:46.533 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:46.533 } 00:21:46.533 } 00:21:46.533 Got JSON-RPC error response 00:21:46.533 GoRPCClient: error on JSON-RPC call 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:46.533 15:29:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:21:47.101 [2024-11-20 15:29:33.393948] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:21:47.101 [2024-11-20 15:29:33.493775] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:21:47.359 [2024-11-20 15:29:33.593623] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:47.359 [2024-11-20 15:29:33.593646] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:47.359 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:47.359 cookie is 0 00:21:47.359 is_local: 1 00:21:47.359 our_own: 0 00:21:47.359 wide_area: 0 00:21:47.359 multicast: 1 00:21:47.359 cached: 1 00:21:47.359 [2024-11-20 15:29:33.693458] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:47.359 [2024-11-20 15:29:33.693483] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:47.359 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:47.359 cookie is 0 00:21:47.359 is_local: 1 00:21:47.359 our_own: 0 00:21:47.359 wide_area: 0 00:21:47.359 multicast: 1 00:21:47.359 cached: 1 00:21:47.359 [2024-11-20 15:29:33.693495] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:21:47.359 [2024-11-20 15:29:33.793299] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:47.359 [2024-11-20 15:29:33.793321] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:21:47.359 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:47.359 cookie is 0 00:21:47.359 is_local: 1 00:21:47.359 our_own: 0 00:21:47.359 wide_area: 0 00:21:47.359 multicast: 1 00:21:47.359 cached: 1 00:21:47.618 [2024-11-20 15:29:33.893136] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:47.618 [2024-11-20 15:29:33.893158] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:21:47.618 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:47.618 cookie is 0 00:21:47.618 is_local: 1 00:21:47.618 our_own: 0 00:21:47.618 wide_area: 0 00:21:47.618 multicast: 1 00:21:47.618 cached: 1 00:21:47.618 [2024-11-20 15:29:33.893168] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:21:48.186 [2024-11-20 15:29:34.604501] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:21:48.186 [2024-11-20 15:29:34.604548] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:21:48.186 [2024-11-20 15:29:34.604583] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:48.445 [2024-11-20 15:29:34.690468] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:21:48.445 [2024-11-20 15:29:34.748762] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:21:48.445 [2024-11-20 15:29:34.749404] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x144d3f0:1 started. 00:21:48.445 [2024-11-20 15:29:34.751119] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:21:48.445 [2024-11-20 15:29:34.751147] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:48.445 [2024-11-20 15:29:34.752999] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x144d3f0 was disconnected and freed. delete nvme_qpair. 
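Once a listener is back on 4421, the discovery path completes normally: discovery_log_page_cb reports the subsystem as new, a controller and qpair are created, and attach finishes (the 10.0.0.3 side repeats the same sequence just below). A short sketch of how that state can be inspected over the test's RPC socket; the subcommands and jq filters are the ones the script itself runs in this section, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

    SOCK=/tmp/host.sock

    # The running mDNS discovery service (the test expects "mdns"):
    scripts/rpc.py -s "$SOCK" bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'

    # Controllers that discovery attached, and the port each path now
    # uses (4421 once the listeners have moved off 4420):
    scripts/rpc.py -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name'
    scripts/rpc.py -s "$SOCK" bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'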
00:21:48.446 [2024-11-20 15:29:34.803762] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:48.446 [2024-11-20 15:29:34.803784] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:48.446 [2024-11-20 15:29:34.803797] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:48.446 [2024-11-20 15:29:34.889735] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:21:48.705 [2024-11-20 15:29:34.947930] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:21:48.705 [2024-11-20 15:29:34.948418] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1431700:1 started. 00:21:48.705 [2024-11-20 15:29:34.949831] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:21:48.705 [2024-11-20 15:29:34.949856] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:48.705 [2024-11-20 15:29:34.952636] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1431700 was disconnected and freed. delete nvme_qpair. 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ 
\m\d\n\s\1\_\n\v\m\e ]] 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.998 15:29:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.998 [2024-11-20 15:29:38.003074] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:21:51.998 2024/11/20 15:29:38 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:21:51.998 request: 00:21:51.998 { 00:21:51.998 "method": "bdev_nvme_start_mdns_discovery", 00:21:51.998 "params": { 00:21:51.998 "name": "cdc", 00:21:51.998 "svcname": "_nvme-disc._tcp", 00:21:51.998 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:51.998 } 00:21:51.998 } 00:21:51.998 Got JSON-RPC error response 00:21:51.998 GoRPCClient: error on JSON-RPC call 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 
]] 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:51.998 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local 
ip=10.0.0.3 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:21:51.999 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:51.999 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:21:51.999 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:51.999 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:51.999 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:51.999 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:51.999 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
=;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.999 [2024-11-20 15:29:38.186191] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.999 15:29:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:52.989 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:52.989 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:52.989 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 95928 00:21:52.989 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 95928 00:21:53.247 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 95957 00:21:53.247 Got SIGTERM, quitting. 00:21:53.247 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:21:53.247 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:21:53.247 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:53.247 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:21:53.247 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:21:53.247 avahi-daemon 0.8 exiting. 
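The 'not found' pass above is the same readarray/glob-match loop as the earlier 'found' check, inverted. A minimal condensed sketch of that check, assuming only the parseable (-p) avahi-browse record format shown in the trace; the real helper tests the process, address, and port globs on separate lines, which the combined match below collapses:
output=$(avahi-browse -t -r _nvme-disc._tcp -p)
readarray -t lines <<< "$output"
for line in "${lines[@]}"; do
    # A surviving registration would match all three fields in one record.
    if [[ $line == *spdk1* && $line == *10.0.0.3* && $line == *8009* ]]; then
        exit 1  # spdk1 on 10.0.0.3:8009 must be gone after nvmf_subsystem_remove_listener
    fi
done
# Falling through means the listener removal took the mDNS record down, as above.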
00:21:53.247 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:53.247 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:21:53.247 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.247 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:53.248 rmmod nvme_tcp 00:21:53.248 rmmod nvme_fabrics 00:21:53.248 rmmod nvme_keyring 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 95872 ']' 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 95872 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 95872 ']' 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 95872 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95872 00:21:53.248 killing process with pid 95872 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95872' 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 95872 00:21:53.248 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 95872 00:21:53.505 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:53.505 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:53.505 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:53.505 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:21:53.505 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:21:53.505 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:53.505 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:21:53.505 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:53.505 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:53.505 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:53.763 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:53.763 15:29:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.763 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:21:53.763 00:21:53.764 real 0m23.394s 00:21:53.764 user 0m43.626s 00:21:53.764 sys 0m3.545s 00:21:53.764 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.764 15:29:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.764 ************************************ 00:21:53.764 END TEST nvmf_mdns_discovery 00:21:53.764 ************************************ 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.023 ************************************ 00:21:54.023 START TEST nvmf_host_multipath 00:21:54.023 ************************************ 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:54.023 * Looking for test storage... 
00:21:54.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:54.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.023 --rc genhtml_branch_coverage=1 00:21:54.023 --rc genhtml_function_coverage=1 00:21:54.023 --rc genhtml_legend=1 00:21:54.023 --rc geninfo_all_blocks=1 00:21:54.023 --rc geninfo_unexecuted_blocks=1 00:21:54.023 00:21:54.023 ' 00:21:54.023 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:54.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.023 --rc genhtml_branch_coverage=1 00:21:54.023 --rc genhtml_function_coverage=1 00:21:54.023 --rc genhtml_legend=1 00:21:54.024 --rc geninfo_all_blocks=1 00:21:54.024 --rc geninfo_unexecuted_blocks=1 00:21:54.024 00:21:54.024 ' 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:54.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.024 --rc genhtml_branch_coverage=1 00:21:54.024 --rc genhtml_function_coverage=1 00:21:54.024 --rc genhtml_legend=1 00:21:54.024 --rc geninfo_all_blocks=1 00:21:54.024 --rc geninfo_unexecuted_blocks=1 00:21:54.024 00:21:54.024 ' 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:54.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.024 --rc genhtml_branch_coverage=1 00:21:54.024 --rc genhtml_function_coverage=1 00:21:54.024 --rc genhtml_legend=1 00:21:54.024 --rc geninfo_all_blocks=1 00:21:54.024 --rc geninfo_unexecuted_blocks=1 00:21:54.024 00:21:54.024 ' 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.024 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.283 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.283 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:54.284 Cannot find device "nvmf_init_br" 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:54.284 Cannot find device "nvmf_init_br2" 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:54.284 Cannot find device "nvmf_tgt_br" 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:54.284 Cannot find device "nvmf_tgt_br2" 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:54.284 Cannot find device "nvmf_init_br" 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:54.284 Cannot find device "nvmf_init_br2" 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:54.284 Cannot find device "nvmf_tgt_br" 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:54.284 Cannot find device "nvmf_tgt_br2" 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:54.284 Cannot find device "nvmf_br" 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:54.284 Cannot find device "nvmf_init_if" 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:54.284 Cannot find device "nvmf_init_if2" 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:54.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:54.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:54.284 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:54.543 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:54.543 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:54.543 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:54.543 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:54.543 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:54.543 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:54.543 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:54.544 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:54.544 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:21:54.544 00:21:54.544 --- 10.0.0.3 ping statistics --- 00:21:54.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.544 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:54.544 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:54.544 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:21:54.544 00:21:54.544 --- 10.0.0.4 ping statistics --- 00:21:54.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.544 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:54.544 15:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:54.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:21:54.544 00:21:54.544 --- 10.0.0.1 ping statistics --- 00:21:54.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.544 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:21:54.544 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:54.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:54.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:21:54.544 00:21:54.544 --- 10.0.0.2 ping statistics --- 00:21:54.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.544 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:54.544 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.544 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:21:54.544 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:54.544 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.544 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:54.544 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:54.544 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.544 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:54.544 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=96616 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 96616 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 96616 ']' 00:21:54.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.803 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:54.803 [2024-11-20 15:29:41.102568] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:21:54.803 [2024-11-20 15:29:41.103170] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.803 [2024-11-20 15:29:41.257109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:55.063 [2024-11-20 15:29:41.320203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.063 [2024-11-20 15:29:41.320272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.063 [2024-11-20 15:29:41.320283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.063 [2024-11-20 15:29:41.320291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.063 [2024-11-20 15:29:41.320313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.063 [2024-11-20 15:29:41.321658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.063 [2024-11-20 15:29:41.321660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.632 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.632 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:55.632 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:55.632 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:55.632 15:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:55.632 15:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.632 15:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96616 00:21:55.632 15:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:55.891 [2024-11-20 15:29:42.242346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.891 15:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:56.150 Malloc0 00:21:56.150 15:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:56.409 15:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:56.669 15:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:56.669 [2024-11-20 15:29:43.145941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:56.929 15:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:21:56.929 [2024-11-20 15:29:43.373662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:56.929 15:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96715 00:21:56.929 15:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:56.929 15:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.929 15:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96715 /var/tmp/bdevperf.sock 00:21:56.929 15:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 96715 ']' 00:21:56.929 15:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.929 15:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.929 15:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.929 15:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.929 15:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:58.306 15:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.306 15:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:58.306 15:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:58.306 15:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:58.565 Nvme0n1 00:21:58.565 15:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:58.823 Nvme0n1 00:21:58.823 15:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:58.823 15:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:00.200 15:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:22:00.200 15:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:00.200 15:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
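For orientation before the ANA flip/verify cycles that follow, the multipath setup above condenses to the RPC sequence below (a recap sketch of commands already logged, not additional test steps; rpc.py here abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    # Sketch: condensed from the RPCs logged above; 'rpc.py' is an abbreviation.
    # Target side: one subsystem, one Malloc namespace, two TCP listeners
    # (ports 4420 and 4421 are the two I/O paths).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

    # Initiator (bdevperf) side: attach the same subsystem over both ports under
    # one controller name with -x multipath; both attaches report Nvme0n1.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # Per-listener ANA state steers path selection; each confirm_io_on_port
    # block below then checks which port actually carried the I/O.
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized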
00:22:00.459 15:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:22:00.459 15:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96616 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:00.459 15:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96804 00:22:00.459 15:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:07.057 Attaching 4 probes... 00:22:07.057 @path[10.0.0.3, 4421]: 22896 00:22:07.057 @path[10.0.0.3, 4421]: 23264 00:22:07.057 @path[10.0.0.3, 4421]: 23224 00:22:07.057 @path[10.0.0.3, 4421]: 23433 00:22:07.057 @path[10.0.0.3, 4421]: 23482 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96804 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:07.057 15:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:07.057 15:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:07.057 15:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:07.057 15:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96935 00:22:07.057 15:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:07.057 15:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96616 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:13.620 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:13.620 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:13.620 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:13.620 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:13.620 Attaching 4 probes... 00:22:13.620 @path[10.0.0.3, 4420]: 21192 00:22:13.620 @path[10.0.0.3, 4420]: 21249 00:22:13.620 @path[10.0.0.3, 4420]: 22905 00:22:13.620 @path[10.0.0.3, 4420]: 23002 00:22:13.620 @path[10.0.0.3, 4420]: 22999 00:22:13.620 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:13.620 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:13.620 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:13.620 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:13.620 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:13.620 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:13.620 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96935 00:22:13.620 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:13.621 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:13.621 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:13.621 15:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:13.878 15:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:13.878 15:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97071 00:22:13.878 15:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96616 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:13.878 15:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:20.440 Attaching 4 probes... 
00:22:20.440 @path[10.0.0.3, 4421]: 18479 00:22:20.440 @path[10.0.0.3, 4421]: 22805 00:22:20.440 @path[10.0.0.3, 4421]: 22847 00:22:20.440 @path[10.0.0.3, 4421]: 23008 00:22:20.440 @path[10.0.0.3, 4421]: 22929 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97071 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97196 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96616 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:20.440 15:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:27.033 15:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:27.033 15:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:27.033 Attaching 4 probes... 
00:22:27.033 00:22:27.033 00:22:27.033 00:22:27.033 00:22:27.033 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97196 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96616 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97331 00:22:27.033 15:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:33.602 Attaching 4 probes... 
00:22:33.602 @path[10.0.0.3, 4421]: 22125
00:22:33.602 @path[10.0.0.3, 4421]: 21981
00:22:33.602 @path[10.0.0.3, 4421]: 21376
00:22:33.602 @path[10.0.0.3, 4421]: 22018
00:22:33.602 @path[10.0.0.3, 4421]: 21348
00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97331
00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:22:33.602 15:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:22:33.602 [2024-11-20 15:30:19.981544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698eb0 is same with the state(6) to be set
[... the same tcp.c:1773 recv-state message for tqpair=0x1698eb0 repeated verbatim, timestamps 15:30:19.981602 through 15:30:19.982289 elided ...]
00:22:33.603 [2024-11-20 15:30:19.982297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698eb0 is same with the state(6) to be set
00:22:33.603 15:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1
00:22:34.538 15:30:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:22:34.538 15:30:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97463
00:22:34.538 15:30:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96616 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:22:34.538 15:30:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420
00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:22:41.108 Attaching 4 probes...
00:22:41.108 @path[10.0.0.3, 4420]: 22036 00:22:41.108 @path[10.0.0.3, 4420]: 22599 00:22:41.108 @path[10.0.0.3, 4420]: 22541 00:22:41.108 @path[10.0.0.3, 4420]: 22751 00:22:41.108 @path[10.0.0.3, 4420]: 22755 00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97463 00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:41.108 [2024-11-20 15:30:27.480698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:41.108 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:41.366 15:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:47.933 15:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:47.933 15:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97655 00:22:47.933 15:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96616 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:47.933 15:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:54.519 Attaching 4 probes... 
00:22:54.519 @path[10.0.0.3, 4421]: 21415
00:22:54.519 @path[10.0.0.3, 4421]: 21850
00:22:54.519 @path[10.0.0.3, 4421]: 21505
00:22:54.519 @path[10.0.0.3, 4421]: 22144
00:22:54.519 @path[10.0.0.3, 4421]: 21895
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97655
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96715
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 96715 ']'
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 96715
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:54.519 15:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96715
00:22:54.519 killing process with pid 96715
00:22:54.520 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:54.520 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:54.520 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96715'
00:22:54.520 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 96715
00:22:54.520 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 96715
00:22:54.520 {
00:22:54.520   "results": [
00:22:54.520     {
00:22:54.520       "job": "Nvme0n1",
00:22:54.520       "core_mask": "0x4",
00:22:54.520       "workload": "verify",
00:22:54.520       "status": "terminated",
00:22:54.520       "verify_range": {
00:22:54.520         "start": 0,
00:22:54.520         "length": 16384
00:22:54.520       },
00:22:54.520       "queue_depth": 128,
00:22:54.520       "io_size": 4096,
00:22:54.520       "runtime": 54.751909,
00:22:54.520       "iops": 9571.100068857873,
00:22:54.520       "mibps": 37.38710964397607,
00:22:54.520       "io_failed": 0,
00:22:54.520       "io_timeout": 0,
00:22:54.520       "avg_latency_us": 13355.889450214356,
00:22:54.520       "min_latency_us": 585.6128514056225,
00:22:54.520       "max_latency_us": 7061253.963052209
00:22:54.520     }
00:22:54.520   ],
00:22:54.520   "core_count": 1
00:22:54.520 }
00:22:54.520 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96715
00:22:54.520 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:22:54.520 [2024-11-20 15:29:43.453083] Starting SPDK v25.01-pre git sha1 1981e6eec /
DPDK 24.03.0 initialization... 00:22:54.520 [2024-11-20 15:29:43.453179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96715 ] 00:22:54.520 [2024-11-20 15:29:43.590708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.520 [2024-11-20 15:29:43.659467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.520 Running I/O for 90 seconds... 00:22:54.520 11921.00 IOPS, 46.57 MiB/s [2024-11-20T15:30:41.002Z] 11905.00 IOPS, 46.50 MiB/s [2024-11-20T15:30:41.002Z] 11833.33 IOPS, 46.22 MiB/s [2024-11-20T15:30:41.002Z] 11772.25 IOPS, 45.99 MiB/s [2024-11-20T15:30:41.002Z] 11736.80 IOPS, 45.85 MiB/s [2024-11-20T15:30:41.002Z] 11726.50 IOPS, 45.81 MiB/s [2024-11-20T15:30:41.002Z] 11727.71 IOPS, 45.81 MiB/s [2024-11-20T15:30:41.002Z] 11723.00 IOPS, 45.79 MiB/s [2024-11-20T15:30:41.002Z] [2024-11-20 15:29:53.392493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.520 [2024-11-20 15:29:53.392553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.392606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.520 [2024-11-20 15:29:53.392622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.392642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.520 [2024-11-20 15:29:53.392656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.392687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.520 [2024-11-20 15:29:53.392702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.392720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.520 [2024-11-20 15:29:53.392734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.392753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.520 [2024-11-20 15:29:53.392766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.392784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.520 [2024-11-20 15:29:53.392798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:22:54.520 [2024-11-20 15:29:53.392817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.392830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.392848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.392862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.392880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.392893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.392937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.392951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.392969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.392983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.520 [2024-11-20 15:29:53.393430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:54.520 [2024-11-20 15:29:53.393448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.521 [2024-11-20 15:29:53.393461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:54.521 [2024-11-20 15:29:53.393479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.521 [2024-11-20 15:29:53.393492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:54.521 [2024-11-20 15:29:53.393510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.521 [2024-11-20 15:29:53.393524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:54.521 [2024-11-20 15:29:53.393542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.521 [2024-11-20 15:29:53.393555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:54.521 [2024-11-20 15:29:53.393574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.521 [2024-11-20 15:29:53.393588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:54.521 [2024-11-20 15:29:53.393606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.521 [2024-11-20 15:29:53.393619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:54.521 [2024-11-20 15:29:53.393637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.521 [2024-11-20 15:29:53.393650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:54.521 [2024-11-20 15:29:53.393671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.521 [2024-11-20 15:29:53.393692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:54.521 [2024-11-20 15:29:53.393711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.521 [2024-11-20 15:29:53.393728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:54.521 [2024-11-20 15:29:53.393748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.521 [2024-11-20 15:29:53.393761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:54.521 [2024-11-20 15:29:53.393779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK 
00:22:54.521 [2024-11-20 15:29:53.393792 .. 15:29:53.397993] nvme_qpair.c: (repeated *NOTICE* pairs from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion, summarized) READ sqid:1 lba:98664..98872 len:8 SGL TRANSPORT DATA BLOCK and WRITE sqid:1 lba:98936..99440 len:8 SGL DATA BLOCK OFFSET each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
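The "(03/02)" pair in each completion notice above is the NVMe status the target returned for the command. A minimal standalone sketch of decoding it, assuming the pair is printed as (status code type/status code), with SCT 0x3 being the NVMe Path Related status code type and SC 0x02 under it being Asymmetric Access Inaccessible, which matches the notice text:

/* Minimal sketch, not SPDK source: decode the "(SCT/SC)" pair from the
 * completion notices above under the assumptions stated in the lead-in. */
#include <stdio.h>

static const char *nvme_status_str(unsigned sct, unsigned sc)
{
    if (sct == 0x3) {                 /* Path Related Status type */
        switch (sc) {
        case 0x01: return "ASYMMETRIC ACCESS PERSISTENT LOSS";
        case 0x02: return "ASYMMETRIC ACCESS INACCESSIBLE";
        case 0x03: return "ASYMMETRIC ACCESS TRANSITION";
        }
    }
    if (sct == 0x0 && sc == 0x00)     /* Generic Command Status: Success */
        return "SUCCESS";
    return "OTHER";
}

int main(void)
{
    /* The pair seen throughout this stage of the log. */
    printf("(03/02) -> %s\n", nvme_status_str(0x3, 0x02));
    return 0;
}

Seeing this status on every queued READ and WRITE is consistent with the ANA state of this path being reported as inaccessible during this stage of the test.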
00:22:54.523 11609.78 IOPS, 45.35 MiB/s [2024-11-20T15:30:41.005Z] 11531.10 IOPS, 45.04 MiB/s [2024-11-20T15:30:41.005Z] 11468.45 IOPS, 44.80 MiB/s [2024-11-20T15:30:41.005Z] 11471.42 IOPS, 44.81 MiB/s [2024-11-20T15:30:41.005Z] 11470.85 IOPS, 44.81 MiB/s [2024-11-20T15:30:41.005Z] 11472.07 IOPS, 44.81 MiB/s [2024-11-20T15:30:41.005Z]
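The throughput samples above line up with the len:8 commands in the surrounding notices if one assumes a 512-byte LBA format (an assumption, not stated in the log): each I/O is then 8 * 512 B = 4 KiB, and the printed MiB/s follow directly from the IOPS. A small self-contained check:

/* Sanity-check sketch: convert the logged IOPS samples to MiB/s under the
 * 4 KiB-per-I/O assumption above; the output matches the logged values. */
#include <stdio.h>

int main(void)
{
    const double iops[] = { 11609.78, 11531.10, 11468.45,
                            11471.42, 11470.85, 11472.07 };
    for (int i = 0; i < 6; i++)
        printf("%8.2f IOPS -> %5.2f MiB/s\n",
               iops[i], iops[i] * 4096.0 / (1024.0 * 1024.0));
    return 0;
}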
00:22:54.524 [2024-11-20 15:29:59.890631 .. 15:29:59.894833] nvme_qpair.c: (repeated *NOTICE* pairs from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion, summarized) WRITE sqid:1 lba:24904..25512 len:8 SGL DATA BLOCK OFFSET and READ sqid:1 lba:24608..24816 len:8 SGL TRANSPORT DATA BLOCK each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; the record stream continues below
lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.526 [2024-11-20 15:29:59.894846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.526 [2024-11-20 15:29:59.894868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.526 [2024-11-20 15:29:59.894881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.526 [2024-11-20 15:29:59.894903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.526 [2024-11-20 15:29:59.894916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:54.526 [2024-11-20 15:29:59.894938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:29:59.894951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.894972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:29:59.894986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:29:59.895022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:29:59.895057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:29:59.895091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:29:59.895126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:29:59.895160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:29:59.895205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:29:59.895240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:29:59.895275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:29:59.895311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:29:59.895345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.527 [2024-11-20 15:29:59.895380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.527 [2024-11-20 15:29:59.895416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.527 [2024-11-20 15:29:59.895451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.527 [2024-11-20 15:29:59.895487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.527 [2024-11-20 15:29:59.895522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:22:54.527 [2024-11-20 15:29:59.895544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.527 [2024-11-20 15:29:59.895557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.527 [2024-11-20 15:29:59.895592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:29:59.895619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.527 [2024-11-20 15:29:59.895633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:54.527 11127.73 IOPS, 43.47 MiB/s [2024-11-20T15:30:41.009Z] 10754.50 IOPS, 42.01 MiB/s [2024-11-20T15:30:41.009Z] 10792.24 IOPS, 42.16 MiB/s [2024-11-20T15:30:41.009Z] 10826.00 IOPS, 42.29 MiB/s [2024-11-20T15:30:41.009Z] 10860.11 IOPS, 42.42 MiB/s [2024-11-20T15:30:41.009Z] 10889.00 IOPS, 42.54 MiB/s [2024-11-20T15:30:41.009Z] 10919.86 IOPS, 42.66 MiB/s [2024-11-20T15:30:41.009Z] [2024-11-20 15:30:06.807963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:30:06.808039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:30:06.808095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:30:06.808111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:30:06.808131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:30:06.808145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:30:06.808164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:30:06.808178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:30:06.809081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:30:06.809109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:54.527 [2024-11-20 15:30:06.809134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.527 [2024-11-20 15:30:06.809148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
[... ~103 similar command/completion pairs at 15:30:06.808-15:30:06.812: WRITE lba:42840-43248 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:42424-42824 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:000e-0074 p:0 m:0 dnr:0 ...]
00:22:54.530 [2024-11-20 15:30:06.812902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:54.530 [2024-11-20 15:30:06.812915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:22:54.530 10667.23 IOPS, 41.67 MiB/s [2024-11-20T15:30:41.012Z] 10203.43 IOPS, 39.86 MiB/s [2024-11-20T15:30:41.012Z] 9778.29 IOPS, 38.20 MiB/s [2024-11-20T15:30:41.012Z] 9387.16 IOPS, 36.67 MiB/s [2024-11-20T15:30:41.012Z] 9026.12 IOPS, 35.26 MiB/s [2024-11-20T15:30:41.012Z] 8691.81 IOPS, 33.95 MiB/s [2024-11-20T15:30:41.012Z] 8381.39 IOPS, 32.74 MiB/s [2024-11-20T15:30:41.012Z] 8291.10 IOPS, 32.39 MiB/s [2024-11-20T15:30:41.012Z] 8391.73 IOPS, 32.78 MiB/s [2024-11-20T15:30:41.012Z] 8468.81 IOPS, 33.08 MiB/s [2024-11-20T15:30:41.012Z] 8540.91 IOPS, 33.36 MiB/s [2024-11-20T15:30:41.012Z] 8614.21 IOPS, 33.65 MiB/s [2024-11-20T15:30:41.012Z] 8671.26 IOPS, 33.87 MiB/s [2024-11-20T15:30:41.012Z]
[2024-11-20 15:30:19.983410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.530 [2024-11-20 15:30:19.983459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
[... 6 similar WRITE command/completion pairs at 15:30:19.983: lba:56128-56168 on sqid:1, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0015-001a p:0 m:0 dnr:0 ...]
00:22:54.530 [2024-11-20 15:30:19.983743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 15:30:19.983756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0
[2024-11-20 15:30:19.983936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 15:30:19.983951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 28 similar command/completion pairs at 15:30:19.983-15:30:19.984: WRITE lba:56192-56376 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:56000-56024 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[2024-11-20 15:30:19.984739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56032 len:8 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:22:54.531 [2024-11-20 15:30:19.984752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.984766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.531 [2024-11-20 15:30:19.984779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.984793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.531 [2024-11-20 15:30:19.984806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.984824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:56056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.531 [2024-11-20 15:30:19.984838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.984852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.531 [2024-11-20 15:30:19.984865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.984879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.531 [2024-11-20 15:30:19.984893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.984907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.531 [2024-11-20 15:30:19.984920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.984934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.531 [2024-11-20 15:30:19.984947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.984961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.531 [2024-11-20 15:30:19.984974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.984988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.531 [2024-11-20 15:30:19.985001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.531 
[2024-11-20 15:30:19.985027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.531 [2024-11-20 15:30:19.985055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.531 [2024-11-20 15:30:19.985083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.531 [2024-11-20 15:30:19.985110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.531 [2024-11-20 15:30:19.985136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.531 [2024-11-20 15:30:19.985167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.531 [2024-11-20 15:30:19.985194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.531 [2024-11-20 15:30:19.985223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.531 [2024-11-20 15:30:19.985250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.531 [2024-11-20 15:30:19.985276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.531 [2024-11-20 15:30:19.985303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.531 [2024-11-20 15:30:19.985329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.531 [2024-11-20 15:30:19.985356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.531 [2024-11-20 15:30:19.985369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.985977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.985989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986136] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:56768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986419] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.532 [2024-11-20 15:30:19.986446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.532 [2024-11-20 15:30:19.986459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56856 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.533 [2024-11-20 15:30:19.986854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.533 [2024-11-20 15:30:19.986896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56904 len:8 PRP1 0x0 PRP2 0x0 00:22:54.533 [2024-11-20 15:30:19.986908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.533 [2024-11-20 15:30:19.986940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.533 [2024-11-20 15:30:19.986950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56912 len:8 PRP1 0x0 PRP2 0x0 00:22:54.533 [2024-11-20 15:30:19.986962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.986975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.533 [2024-11-20 15:30:19.986985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.533 [2024-11-20 15:30:19.986994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56920 len:8 PRP1 0x0 PRP2 0x0 00:22:54.533 [2024-11-20 15:30:19.987007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.987020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.533 [2024-11-20 15:30:19.987029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.533 [2024-11-20 15:30:19.987038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56928 len:8 PRP1 0x0 PRP2 0x0 00:22:54.533 [2024-11-20 15:30:19.987050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.987063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.533 [2024-11-20 15:30:19.987072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.533 [2024-11-20 15:30:19.987081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56936 len:8 PRP1 0x0 PRP2 0x0 00:22:54.533 [2024-11-20 15:30:19.987093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.987106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.533 [2024-11-20 15:30:19.987115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.533 [2024-11-20 15:30:19.987124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56944 len:8 PRP1 0x0 PRP2 0x0 00:22:54.533 [2024-11-20 15:30:19.987137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.987150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.533 [2024-11-20 15:30:19.987159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.533 [2024-11-20 15:30:19.987168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56952 len:8 PRP1 0x0 PRP2 0x0 00:22:54.533 [2024-11-20 15:30:19.987180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.987193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.533 [2024-11-20 15:30:19.987204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.533 [2024-11-20 15:30:19.987214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56960 len:8 PRP1 0x0 PRP2 0x0 00:22:54.533 [2024-11-20 15:30:19.987226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.987238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.533 [2024-11-20 15:30:19.987247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.533 [2024-11-20 15:30:19.987257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56968 len:8 PRP1 0x0 PRP2 0x0 00:22:54.533 [2024-11-20 15:30:19.987269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 
[2024-11-20 15:30:19.987289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.533 [2024-11-20 15:30:19.987298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.533 [2024-11-20 15:30:19.987307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56976 len:8 PRP1 0x0 PRP2 0x0 00:22:54.533 [2024-11-20 15:30:19.987319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.987332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.533 [2024-11-20 15:30:19.987341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.533 [2024-11-20 15:30:19.987350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56984 len:8 PRP1 0x0 PRP2 0x0 00:22:54.533 [2024-11-20 15:30:19.987363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.987375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.533 [2024-11-20 15:30:19.987384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.533 [2024-11-20 15:30:19.987393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56992 len:8 PRP1 0x0 PRP2 0x0 00:22:54.533 [2024-11-20 15:30:19.987405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.533 [2024-11-20 15:30:19.987418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.534 [2024-11-20 15:30:19.987427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.534 [2024-11-20 15:30:19.987436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57000 len:8 PRP1 0x0 PRP2 0x0 00:22:54.534 [2024-11-20 15:30:19.987448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.534 [2024-11-20 15:30:19.987461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.534 [2024-11-20 15:30:19.987470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.534 [2024-11-20 15:30:19.987479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57008 len:8 PRP1 0x0 PRP2 0x0 00:22:54.534 [2024-11-20 15:30:19.987492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.534 [2024-11-20 15:30:19.988809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:54.534 [2024-11-20 15:30:19.988888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.534 [2024-11-20 15:30:19.988906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.534 [2024-11-20 15:30:19.988935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xbd4190 (9): Bad file descriptor
00:22:54.534 [2024-11-20 15:30:19.989064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:54.534 [2024-11-20 15:30:19.989086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd4190 with addr=10.0.0.3, port=4421
00:22:54.534 [2024-11-20 15:30:19.989100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd4190 is same with the state(6) to be set
00:22:54.534 [2024-11-20 15:30:19.989120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd4190 (9): Bad file descriptor
00:22:54.534 [2024-11-20 15:30:19.989140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:22:54.534 [2024-11-20 15:30:19.989166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:22:54.534 [2024-11-20 15:30:19.989180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:22:54.534 [2024-11-20 15:30:19.989193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:22:54.534 [2024-11-20 15:30:19.989207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:54.534 8731.03 IOPS, 34.11 MiB/s [2024-11-20T15:30:41.016Z] 8801.78 IOPS, 34.38 MiB/s [2024-11-20T15:30:41.016Z] 8864.54 IOPS, 34.63 MiB/s [2024-11-20T15:30:41.016Z] 8926.26 IOPS, 34.87 MiB/s [2024-11-20T15:30:41.016Z] 8984.38 IOPS, 35.10 MiB/s [2024-11-20T15:30:41.016Z] 9044.90 IOPS, 35.33 MiB/s [2024-11-20T15:30:41.016Z] 9102.15 IOPS, 35.56 MiB/s [2024-11-20T15:30:41.016Z] 9151.86 IOPS, 35.75 MiB/s [2024-11-20T15:30:41.016Z] 9201.86 IOPS, 35.94 MiB/s [2024-11-20T15:30:41.016Z] 9248.07 IOPS, 36.13 MiB/s [2024-11-20T15:30:41.016Z] [2024-11-20 15:30:30.035820] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:22:54.534 9288.84 IOPS, 36.28 MiB/s [2024-11-20T15:30:41.016Z] 9322.72 IOPS, 36.42 MiB/s [2024-11-20T15:30:41.016Z] 9357.77 IOPS, 36.55 MiB/s [2024-11-20T15:30:41.016Z] 9390.33 IOPS, 36.68 MiB/s [2024-11-20T15:30:41.016Z] 9418.78 IOPS, 36.79 MiB/s [2024-11-20T15:30:41.016Z] 9446.20 IOPS, 36.90 MiB/s [2024-11-20T15:30:41.016Z] 9476.49 IOPS, 37.02 MiB/s [2024-11-20T15:30:41.016Z] 9500.77 IOPS, 37.11 MiB/s [2024-11-20T15:30:41.016Z] 9529.06 IOPS, 37.22 MiB/s [2024-11-20T15:30:41.016Z] 9555.28 IOPS, 37.33 MiB/s [2024-11-20T15:30:41.016Z] Received shutdown signal, test time was about 54.752568 seconds
00:22:54.534
00:22:54.534 Latency(us)
00:22:54.534 [2024-11-20T15:30:41.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:54.534 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:54.534 Verification LBA range: start 0x0 length 0x4000
00:22:54.534 Nvme0n1 : 54.75 9571.10 37.39 0.00 0.00 13355.89 585.61 7061253.96
00:22:54.534 [2024-11-20T15:30:41.016Z] ===================================================================================================================
00:22:54.534 [2024-11-20T15:30:41.016Z] Total : 9571.10 37.39 0.00 0.00 13355.89 585.61 7061253.96
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:54.534 rmmod nvme_tcp
00:22:54.534 rmmod nvme_fabrics
00:22:54.534 rmmod nvme_keyring
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 96616 ']'
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 96616
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 96616 ']'
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 96616
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96616
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:54.534 killing process with pid 96616
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96616'
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 96616
00:22:54.534 15:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 96616
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:54.794 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:22:55.053 ************************************
00:22:55.053 END TEST nvmf_host_multipath
00:22:55.053 ************************************
00:22:55.053
00:22:55.053 real 1m1.054s
00:22:55.053 user 2m47.105s
00:22:55.053 sys 0m18.714s
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:55.053 ************************************
00:22:55.053 START TEST nvmf_timeout
00:22:55.053 ************************************
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:22:55.053 * Looking for test storage...
00:22:55.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:22:55.053 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-:
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-:
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<'
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) ))
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2
00:22:55.312 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:22:55.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:55.313 --rc genhtml_branch_coverage=1
00:22:55.313 --rc genhtml_function_coverage=1
00:22:55.313 --rc genhtml_legend=1
00:22:55.313 --rc geninfo_all_blocks=1
00:22:55.313 --rc geninfo_unexecuted_blocks=1
00:22:55.313
00:22:55.313 '
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:22:55.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:55.313 --rc genhtml_branch_coverage=1
00:22:55.313 --rc genhtml_function_coverage=1
00:22:55.313 --rc genhtml_legend=1
00:22:55.313 --rc geninfo_all_blocks=1
00:22:55.313 --rc geninfo_unexecuted_blocks=1
00:22:55.313
00:22:55.313 '
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:22:55.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:55.313 --rc genhtml_branch_coverage=1
00:22:55.313 --rc genhtml_function_coverage=1
00:22:55.313 --rc genhtml_legend=1
00:22:55.313 --rc geninfo_all_blocks=1
00:22:55.313 --rc geninfo_unexecuted_blocks=1
00:22:55.313
00:22:55.313 '
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:22:55.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:55.313 --rc genhtml_branch_coverage=1
00:22:55.313 --rc genhtml_function_coverage=1
00:22:55.313 --rc genhtml_legend=1
00:22:55.313 --rc geninfo_all_blocks=1
00:22:55.313 --rc geninfo_unexecuted_blocks=1
00:22:55.313
00:22:55.313 '
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:55.313
15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:55.313 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:55.313 15:30:41 
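The enormous PATH values above come from paths/export.sh being sourced once per nested script: each pass prepends the golangci/protoc/go toolchain directories again without checking whether they are already present, so the same three segments pile up. A duplicate-free prepend would look roughly like this (hypothetical helper, not something the repo ships):

  # Hypothetical idempotent prepend; paths/export.sh itself prepends unconditionally.
  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;              # already on PATH: do nothing
          *) PATH="$1:$PATH" ;;     # otherwise prepend exactly once
      esac
  }
  path_prepend /opt/golangci/1.54.2/bin
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/go/1.21.1/bin
  export PATH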
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:55.313 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:55.314 Cannot find device "nvmf_init_br" 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:55.314 Cannot find device "nvmf_init_br2" 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:55.314 Cannot find device "nvmf_tgt_br" 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:55.314 Cannot find device "nvmf_tgt_br2" 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:55.314 Cannot find device "nvmf_init_br" 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:55.314 Cannot find device "nvmf_init_br2" 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:55.314 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:55.573 Cannot find device "nvmf_tgt_br" 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:55.573 Cannot find device "nvmf_tgt_br2" 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:55.573 Cannot find device "nvmf_br" 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:55.573 Cannot find device "nvmf_init_if" 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:55.573 Cannot find device "nvmf_init_if2" 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:55.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:55.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:55.573 15:30:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:55.573 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:55.573 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:55.573 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:55.573 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:55.573 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:55.573 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:55.573 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:55.573 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:55.573 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:55.573 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
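Condensed, the topology that nvmf_veth_init builds above is two veth pairs joined by a bridge, with the target ends moved into their own network namespace. A condensed sketch of the commands from the trace (the second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4, is wired identically and omitted here):

  ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link add nvmf_br type bridge                               # bridge joins the two *_br peers
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic

The trace also brings every veth end and bridge peer up before the bridge is wired, and the four pings that follow confirm initiator-to-target reachability in both directions before the target process starts.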
00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:55.833 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:55.833 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:22:55.833 00:22:55.833 --- 10.0.0.3 ping statistics --- 00:22:55.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.833 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:55.833 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:55.833 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:22:55.833 00:22:55.833 --- 10.0.0.4 ping statistics --- 00:22:55.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.833 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:55.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:22:55.833 00:22:55.833 --- 10.0.0.1 ping statistics --- 00:22:55.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.833 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:55.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:22:55.833 00:22:55.833 --- 10.0.0.2 ping statistics --- 00:22:55.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.833 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=98030 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 98030 00:22:55.833 15:30:42 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98030 ']' 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.833 15:30:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:55.833 [2024-11-20 15:30:42.282510] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:22:55.833 [2024-11-20 15:30:42.282571] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.093 [2024-11-20 15:30:42.433509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:56.093 [2024-11-20 15:30:42.501153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.093 [2024-11-20 15:30:42.501199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.093 [2024-11-20 15:30:42.501209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.093 [2024-11-20 15:30:42.501217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.093 [2024-11-20 15:30:42.501224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:56.093 [2024-11-20 15:30:42.502609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.093 [2024-11-20 15:30:42.502611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.032 15:30:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.032 15:30:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:57.032 15:30:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:57.032 15:30:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:57.032 15:30:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:57.032 15:30:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.032 15:30:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.032 15:30:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:57.032 [2024-11-20 15:30:43.411993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.032 15:30:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:57.290 Malloc0 00:22:57.291 15:30:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:57.549 15:30:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:57.808 15:30:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:58.068 [2024-11-20 15:30:44.296031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:58.068 15:30:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:58.068 15:30:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=98121 00:22:58.068 15:30:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 98121 /var/tmp/bdevperf.sock 00:22:58.068 15:30:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98121 ']' 00:22:58.068 15:30:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.068 15:30:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.068 15:30:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:58.068 15:30:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.068 15:30:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:58.068 [2024-11-20 15:30:44.357719] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:22:58.068 [2024-11-20 15:30:44.357783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98121 ] 00:22:58.068 [2024-11-20 15:30:44.509186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.327 [2024-11-20 15:30:44.573788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.894 15:30:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.894 15:30:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:58.894 15:30:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:59.154 15:30:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:59.413 NVMe0n1 00:22:59.413 15:30:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=98169 00:22:59.413 15:30:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:59.413 15:30:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:59.413 Running I/O for 10 seconds... 
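Pulled out of the xtrace, the full provisioning sequence for this timeout test is a handful of JSON-RPC calls: five against the target (running inside the namespace) to stand up the TCP subsystem, then two against bdevperf's private socket to attach the controller with the path-failure timeouts under test:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side: TCP transport, a 64 MiB / 512 B-block RAM disk, and a subsystem
  # listening on 10.0.0.3:4420.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # Host side: bdevperf exposes its own RPC socket; attach with the timeouts under test.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

With those two knobs set, the next step, removing the listener while the 10-second verify workload is running, should force every queued I/O to abort and exercise the reconnect path.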
00:23:00.363 15:30:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:00.623 11921.00 IOPS, 46.57 MiB/s [2024-11-20T15:30:47.105Z]
00:23:00.623 [2024-11-20 15:30:46.974041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b2510 is same with the state(6) to be set
[previous message repeated ~50 times for tqpair=0x7b2510 between 15:30:46.974784 and 15:30:46.976803]
00:23:00.624 [2024-11-20 15:30:46.977410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:00.624 [2024-11-20 15:30:46.977451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[analogous WRITE command / ABORTED - SQ DELETION completion pairs repeated for lba:107320 through lba:107808]
00:23:00.625 [2024-11-20 15:30:46.978684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.625 [2024-11-20 15:30:46.978694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[analogous READ command / completion pairs continue for lba:107072 onward; section truncated mid-entry]
0x0 00:23:00.626 [2024-11-20 15:30:46.978770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.978781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.978790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.978800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.978808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.978818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.978827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.978836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.978845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.978855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.978863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.978873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.978882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.978892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.978900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.978911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.978920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.978930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.978939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.978949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 
15:30:46.978958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.978968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.978977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.978987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.978995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.979014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.979033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.979051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.979070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.979089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.626 [2024-11-20 15:30:46.979107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.626 [2024-11-20 15:30:46.979442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.626 [2024-11-20 15:30:46.979452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.627 [2024-11-20 15:30:46.979730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.627 [2024-11-20 15:30:46.979757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.627 [2024-11-20 15:30:46.979776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.627 [2024-11-20 15:30:46.979794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.627 [2024-11-20 15:30:46.979813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.627 [2024-11-20 15:30:46.979835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.627 [2024-11-20 15:30:46.979856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.627 [2024-11-20 15:30:46.979875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.627 [2024-11-20 15:30:46.979894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.979904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f7780 is same with the state(6) to be set 00:23:00.627 [2024-11-20 15:30:46.979915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.627 [2024-11-20 15:30:46.979923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:23:00.627 [2024-11-20 15:30:46.979930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107304 len:8 PRP1 0x0 PRP2 0x0 00:23:00.627 [2024-11-20 15:30:46.979939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.627 [2024-11-20 15:30:46.980192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:00.627 [2024-11-20 15:30:46.980261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228bf50 (9): Bad file descriptor 00:23:00.627 [2024-11-20 15:30:46.980371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.627 [2024-11-20 15:30:46.980385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228bf50 with addr=10.0.0.3, port=4420 00:23:00.627 [2024-11-20 15:30:46.980395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228bf50 is same with the state(6) to be set 00:23:00.627 [2024-11-20 15:30:46.980410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228bf50 (9): Bad file descriptor 00:23:00.627 [2024-11-20 15:30:46.980424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:00.627 [2024-11-20 15:30:46.980433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:00.627 [2024-11-20 15:30:46.980443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:00.627 [2024-11-20 15:30:46.980454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:00.627 [2024-11-20 15:30:46.980464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:00.627 15:30:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:23:02.500 6691.50 IOPS, 26.14 MiB/s [2024-11-20T15:30:48.982Z] 4461.00 IOPS, 17.43 MiB/s [2024-11-20T15:30:48.982Z] [2024-11-20 15:30:48.977487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.500 [2024-11-20 15:30:48.977572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228bf50 with addr=10.0.0.3, port=4420 00:23:02.500 [2024-11-20 15:30:48.977604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228bf50 is same with the state(6) to be set 00:23:02.500 [2024-11-20 15:30:48.977633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228bf50 (9): Bad file descriptor 00:23:02.500 [2024-11-20 15:30:48.977653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:02.500 [2024-11-20 15:30:48.977663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:02.500 [2024-11-20 15:30:48.977676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:02.500 [2024-11-20 15:30:48.977703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
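In the retry loop above, errno 111 from posix_sock_create is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.3 port 4420 any longer, so each reconnect attempt fails immediately and bdev_nvme reschedules the controller reset until its loss timeout runs out. The constant can be confirmed on any Linux build host (the header path is an assumption here, not taken from this log):

    $ grep -w 111 /usr/include/asm-generic/errno.h
    #define ECONNREFUSED    111     /* Connection refused */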
00:23:02.500 [2024-11-20 15:30:48.977718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:02.759 15:30:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:23:02.759 15:30:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:02.759 15:30:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:02.759 15:30:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:23:02.759 15:30:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:23:02.759 15:30:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:02.759 15:30:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:03.016 15:30:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:23:03.016 15:30:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:23:04.518 3345.75 IOPS, 13.07 MiB/s [2024-11-20T15:30:51.000Z] 2676.60 IOPS, 10.46 MiB/s [2024-11-20T15:30:51.000Z] [2024-11-20 15:30:50.974644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.518 [2024-11-20 15:30:50.974709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228bf50 with addr=10.0.0.3, port=4420 00:23:04.518 [2024-11-20 15:30:50.974725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228bf50 is same with the state(6) to be set 00:23:04.518 [2024-11-20 15:30:50.974751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228bf50 (9): Bad file descriptor 00:23:04.518 [2024-11-20 15:30:50.974769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:04.518 [2024-11-20 15:30:50.974779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:04.518 [2024-11-20 15:30:50.974791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:04.518 [2024-11-20 15:30:50.974803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:04.518 [2024-11-20 15:30:50.974815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:06.394 2230.50 IOPS, 8.71 MiB/s [2024-11-20T15:30:53.136Z] 1911.86 IOPS, 7.47 MiB/s [2024-11-20T15:30:53.136Z] [2024-11-20 15:30:52.971694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:06.654 [2024-11-20 15:30:52.971842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:06.654 [2024-11-20 15:30:52.971985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:06.654 [2024-11-20 15:30:52.972037] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:23:06.654 [2024-11-20 15:30:52.972088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
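The get_controller and get_bdev helpers traced above each boil down to a single RPC piped through jq. A minimal stand-alone sketch of the same pattern, with the script and socket paths copied from this run:

    #!/usr/bin/env bash
    # Ask the bdevperf app, over its RPC socket, what it currently has attached.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    get_controller() { "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'; }
    get_bdev() { "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'; }
    get_controller   # NVMe0 while attached; empty once ctrlr-loss-timeout deletes it
    get_bdev         # NVMe0n1 while the namespace bdev still exists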
00:23:07.592 1672.88 IOPS, 6.53 MiB/s 00:23:07.592 Latency(us) 00:23:07.592 [2024-11-20T15:30:54.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.592 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:07.592 Verification LBA range: start 0x0 length 0x4000 00:23:07.592 NVMe0n1 : 8.12 1647.32 6.43 15.76 0.00 76913.04 1638.40 7061253.96 00:23:07.592 [2024-11-20T15:30:54.074Z] =================================================================================================================== 00:23:07.592 [2024-11-20T15:30:54.074Z] Total : 1647.32 6.43 15.76 0.00 76913.04 1638.40 7061253.96 00:23:07.592 { 00:23:07.592 "results": [ 00:23:07.592 { 00:23:07.592 "job": "NVMe0n1", 00:23:07.592 "core_mask": "0x4", 00:23:07.592 "workload": "verify", 00:23:07.592 "status": "finished", 00:23:07.592 "verify_range": { 00:23:07.592 "start": 0, 00:23:07.592 "length": 16384 00:23:07.592 }, 00:23:07.592 "queue_depth": 128, 00:23:07.592 "io_size": 4096, 00:23:07.592 "runtime": 8.124083, 00:23:07.592 "iops": 1647.324381102458, 00:23:07.592 "mibps": 6.434860863681476, 00:23:07.592 "io_failed": 128, 00:23:07.592 "io_timeout": 0, 00:23:07.592 "avg_latency_us": 76913.04029041932, 00:23:07.592 "min_latency_us": 1638.4, 00:23:07.592 "max_latency_us": 7061253.963052209 00:23:07.592 } 00:23:07.592 ], 00:23:07.592 "core_count": 1 00:23:07.592 } 00:23:08.161 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:23:08.161 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.161 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:08.420 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:08.420 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:23:08.420 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:08.420 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 98169 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 98121 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98121 ']' 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98121 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98121 00:23:08.678 killing process with pid 98121 00:23:08.678 Received shutdown signal, test time was about 9.121149 seconds 00:23:08.678 00:23:08.678 Latency(us) 00:23:08.678 [2024-11-20T15:30:55.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.678 [2024-11-20T15:30:55.160Z] =================================================================================================================== 00:23:08.678 [2024-11-20T15:30:55.160Z] Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98121' 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98121 00:23:08.678 15:30:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98121 00:23:08.936 15:30:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:08.936 [2024-11-20 15:30:55.414905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:09.195 15:30:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=98322 00:23:09.195 15:30:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:09.195 15:30:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 98322 /var/tmp/bdevperf.sock 00:23:09.195 15:30:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98322 ']' 00:23:09.195 15:30:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.195 15:30:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.195 15:30:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.195 15:30:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.195 15:30:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:09.195 [2024-11-20 15:30:55.489740] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
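The bdevperf launch traced above reduces to one command: start the app idle, pointed at a private RPC socket, and drive it later over that socket. A condensed sketch using the exact flags and paths from this run (the per-flag readings are hedged interpretations, since the log never prints bdevperf's help text):

    #!/usr/bin/env bash
    # -m 0x4  pin to core 2 (matches "Reactor started on core 2" below)
    # -z      start idle and wait for RPC; I/O begins only at perform_tests
    # -q 128 -o 4096 -w verify -t 10  queue depth, I/O size, workload, runtime
    # -f      keep the process alive across I/O failures so timeouts can be watched
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
    # waitforlisten (autotest_common.sh) then polls until the socket accepts RPCs.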
00:23:09.195 [2024-11-20 15:30:55.489827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98322 ] 00:23:09.195 [2024-11-20 15:30:55.641579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.453 [2024-11-20 15:30:55.705844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.041 15:30:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.041 15:30:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:10.041 15:30:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:10.312 15:30:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:10.570 NVMe0n1 00:23:10.570 15:30:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:10.570 15:30:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=98374 00:23:10.570 15:30:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:23:10.570 Running I/O for 10 seconds... 00:23:11.505 15:30:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:11.766 12162.00 IOPS, 47.51 MiB/s [2024-11-20T15:30:58.248Z] [2024-11-20 15:30:58.073194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.766 [2024-11-20 15:30:58.073253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.766 [2024-11-20 15:30:58.073274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.766 [2024-11-20 15:30:58.073285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.766 [2024-11-20 15:30:58.073296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.766 [2024-11-20 15:30:58.073306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.766 [2024-11-20 15:30:58.073317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.766 [2024-11-20 15:30:58.073326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.766 [2024-11-20 15:30:58.073336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.766 [2024-11-20 
15:30:58.073345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... dozens of further nvme_io_qpair_print_command/spdk_nvme_print_completion pairs trimmed: WRITE lba:109080-109496 and READ lba:108672-108800, every outstanding command on qid:1 aborted with SQ DELETION (00/08) once the listener was removed under load ...]
00:23:11.768 [2024-11-20 15:30:58.074704] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.074850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 
nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.074990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.074999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.075018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.075038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.075057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.075076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108960 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.075094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.075113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.769 [2024-11-20 15:30:58.075132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:11.769 [2024-11-20 15:30:58.075286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.769 [2024-11-20 15:30:58.075429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.769 [2024-11-20 15:30:58.075437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.770 [2024-11-20 15:30:58.075456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.770 [2024-11-20 15:30:58.075474] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.770 [2024-11-20 15:30:58.075492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.770 [2024-11-20 15:30:58.075510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.770 [2024-11-20 15:30:58.075529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.770 [2024-11-20 15:30:58.075548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.770 [2024-11-20 15:30:58.075571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.770 [2024-11-20 15:30:58.075589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.770 [2024-11-20 15:30:58.075608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.770 [2024-11-20 15:30:58.075626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.770 [2024-11-20 15:30:58.075647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.770 [2024-11-20 15:30:58.075666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.770 [2024-11-20 15:30:58.075692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:11.770 [2024-11-20 15:30:58.075726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:11.770 [2024-11-20 15:30:58.075734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109040 len:8 PRP1 0x0 PRP2 0x0 00:23:11.770 [2024-11-20 15:30:58.075743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.770 [2024-11-20 15:30:58.075997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:11.770 [2024-11-20 15:30:58.076201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2133f50 (9): Bad file descriptor 00:23:11.770 [2024-11-20 15:30:58.076312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.770 [2024-11-20 15:30:58.076327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2133f50 with addr=10.0.0.3, port=4420 00:23:11.770 [2024-11-20 15:30:58.076337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2133f50 is same with the state(6) to be set 00:23:11.770 [2024-11-20 15:30:58.076352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2133f50 (9): Bad file descriptor 00:23:11.770 [2024-11-20 15:30:58.076366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:11.770 [2024-11-20 15:30:58.076375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:11.770 [2024-11-20 15:30:58.076387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:11.770 [2024-11-20 15:30:58.076398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:23:11.770 [2024-11-20 15:30:58.076409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
15:30:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:23:12.706 6791.50 IOPS, 26.53 MiB/s [2024-11-20T15:30:59.188Z] [2024-11-20 15:30:59.074884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.706 [2024-11-20 15:30:59.074921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2133f50 with addr=10.0.0.3, port=4420
00:23:12.706 [2024-11-20 15:30:59.074933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2133f50 is same with the state(6) to be set
00:23:12.706 [2024-11-20 15:30:59.074950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2133f50 (9): Bad file descriptor
00:23:12.706 [2024-11-20 15:30:59.074965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:23:12.706 [2024-11-20 15:30:59.074975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:23:12.706 [2024-11-20 15:30:59.074987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:23:12.706 [2024-11-20 15:30:59.074997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:23:12.706 [2024-11-20 15:30:59.075008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
15:30:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:12.964 [2024-11-20 15:30:59.310319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
15:30:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 98374
00:23:13.788 4527.67 IOPS, 17.69 MiB/s [2024-11-20T15:31:00.270Z] [2024-11-20 15:31:00.085119] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
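The errno = 111 on each failed connect() above is ECONNREFUSED on Linux: the test has torn down the TCP listener, so the initiator's reconnect attempts are refused until nvmf_subsystem_add_listener restores it. A quick Python check of the errno value (our own snippet, not part of the test scripts):

    import errno, os

    # errno 111 on Linux is ECONNREFUSED, the error the SPDK initiator hits
    # while the subsystem has no TCP listener on 10.0.0.3:4420.
    assert errno.ECONNREFUSED == 111
    print(os.strerror(111))  # -> 'Connection refused'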
00:23:15.659 3395.75 IOPS, 13.26 MiB/s [2024-11-20T15:31:03.077Z]
4891.60 IOPS, 19.11 MiB/s [2024-11-20T15:31:04.012Z]
6136.83 IOPS, 23.97 MiB/s [2024-11-20T15:31:05.153Z]
7018.86 IOPS, 27.42 MiB/s [2024-11-20T15:31:06.090Z]
7686.00 IOPS, 30.02 MiB/s [2024-11-20T15:31:07.027Z]
8220.89 IOPS, 32.11 MiB/s [2024-11-20T15:31:07.027Z]
8626.50 IOPS, 33.70 MiB/s
00:23:20.545 Latency(us)
00:23:20.545 [2024-11-20T15:31:07.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:20.545 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:20.545 Verification LBA range: start 0x0 length 0x4000
00:23:20.545 NVMe0n1 : 10.01 8631.13 33.72 0.00 0.00 14807.82 1664.72 3018551.31
00:23:20.545 [2024-11-20T15:31:07.027Z] ===================================================================================================================
00:23:20.545 [2024-11-20T15:31:07.027Z] Total : 8631.13 33.72 0.00 0.00 14807.82 1664.72 3018551.31
00:23:20.545 {
00:23:20.545   "results": [
00:23:20.545     {
00:23:20.545       "job": "NVMe0n1",
00:23:20.545       "core_mask": "0x4",
00:23:20.545       "workload": "verify",
00:23:20.545       "status": "finished",
00:23:20.545       "verify_range": {
00:23:20.545         "start": 0,
00:23:20.545         "length": 16384
00:23:20.545       },
00:23:20.545       "queue_depth": 128,
00:23:20.545       "io_size": 4096,
00:23:20.545       "runtime": 10.009469,
00:23:20.545       "iops": 8631.12718566789,
00:23:20.545       "mibps": 33.7153405690152,
00:23:20.545       "io_failed": 0,
00:23:20.545       "io_timeout": 0,
00:23:20.545       "avg_latency_us": 14807.81524912517,
00:23:20.545       "min_latency_us": 1664.7196787148594,
00:23:20.545       "max_latency_us": 3018551.3124497994
00:23:20.545     }
00:23:20.545   ],
00:23:20.545   "core_count": 1
00:23:20.545 }
15:31:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=98486
15:31:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
15:31:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:23:20.804 Running I/O for 10 seconds...
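The derived throughput in the results JSON above follows directly from the raw counters: the reported "mibps" equals iops * io_size / 2^20. A minimal Python sketch (our own, with the values copied from the JSON) reproducing it:

    # Reproduce the "mibps" field from "iops" and "io_size" in the JSON above.
    iops = 8631.12718566789        # reported I/Os per second
    io_size = 4096                 # bytes per I/O
    mibps = iops * io_size / (1024 * 1024)
    print(mibps)                   # ~33.7153405690152, matching the report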
00:23:21.745 15:31:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:21.745 11832.00 IOPS, 46.22 MiB/s [2024-11-20T15:31:08.227Z] [2024-11-20 15:31:08.177502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x808d40 is same with the state(6) to be set
[... the identical tcp.c:1773 "recv state of tqpair=0x808d40 is same with the state(6) to be set" message repeats for every event from 15:31:08.177736 through 15:31:08.178778 ...]
00:23:21.747 [2024-11-20 15:31:08.179302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.747 [2024-11-20 15:31:08.179332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print_command/print_completion NOTICE pairs repeat for READ lba:104520-104832; every queued command is completed ABORTED - SQ DELETION (00/08) ...]
00:23:21.748 [2024-11-20 15:31:08.180116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.748 [2024-11-20 15:31:08.180124]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.748 [2024-11-20 15:31:08.180392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.748 [2024-11-20 15:31:08.180411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.748 [2024-11-20 15:31:08.180430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.748 [2024-11-20 15:31:08.180449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.748 [2024-11-20 15:31:08.180467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.748 [2024-11-20 15:31:08.180488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.748 [2024-11-20 15:31:08.180507] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.748 [2024-11-20 15:31:08.180517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.748 [2024-11-20 15:31:08.180526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.749 [2024-11-20 15:31:08.180899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.180990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.180999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:105440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181085] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.749 [2024-11-20 15:31:08.181268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.749 [2024-11-20 15:31:08.181277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181458] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.750 [2024-11-20 15:31:08.181736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.181746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e430 is same with the state(6) to be set 00:23:21.750 [2024-11-20 15:31:08.181757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:21.750 [2024-11-20 15:31:08.181765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:21.750 [2024-11-20 15:31:08.181772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105152 len:8 PRP1 0x0 PRP2 0x0 00:23:21.750 [2024-11-20 15:31:08.181781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.750 [2024-11-20 15:31:08.182013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:21.750 [2024-11-20 15:31:08.182082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2133f50 (9): Bad file descriptor 00:23:21.750 [2024-11-20 15:31:08.182175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.750 [2024-11-20 15:31:08.182195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2133f50 with addr=10.0.0.3, port=4420 00:23:21.750 [2024-11-20 15:31:08.182204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2133f50 is same with the state(6) to be set 00:23:21.750 [2024-11-20 15:31:08.182219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2133f50 (9): Bad file descriptor 00:23:21.750 [2024-11-20 15:31:08.182233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:21.750 [2024-11-20 15:31:08.182242] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:21.750 [2024-11-20 15:31:08.182253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:21.750 [2024-11-20 15:31:08.182265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:21.750 [2024-11-20 15:31:08.182275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:21.750 15:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:22.946 6532.00 IOPS, 25.52 MiB/s [2024-11-20T15:31:09.428Z] [2024-11-20 15:31:09.180766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.946 [2024-11-20 15:31:09.180807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2133f50 with addr=10.0.0.3, port=4420 00:23:22.946 [2024-11-20 15:31:09.180821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2133f50 is same with the state(6) to be set 00:23:22.946 [2024-11-20 15:31:09.180854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2133f50 (9): Bad file descriptor 00:23:22.947 [2024-11-20 15:31:09.180871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:22.947 [2024-11-20 15:31:09.180881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:22.947 [2024-11-20 15:31:09.180893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:22.947 [2024-11-20 15:31:09.180904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:22.947 [2024-11-20 15:31:09.180915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:23.884 4354.67 IOPS, 17.01 MiB/s [2024-11-20T15:31:10.366Z] [2024-11-20 15:31:10.179405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.884 [2024-11-20 15:31:10.179590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2133f50 with addr=10.0.0.3, port=4420 00:23:23.884 [2024-11-20 15:31:10.179614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2133f50 is same with the state(6) to be set 00:23:23.884 [2024-11-20 15:31:10.179643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2133f50 (9): Bad file descriptor 00:23:23.884 [2024-11-20 15:31:10.179661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:23.884 [2024-11-20 15:31:10.179671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:23.884 [2024-11-20 15:31:10.179699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:23.884 [2024-11-20 15:31:10.179711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:23:23.884 [2024-11-20 15:31:10.179723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:24.836 3266.00 IOPS, 12.76 MiB/s [2024-11-20T15:31:11.318Z] [2024-11-20 15:31:11.180501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.836 [2024-11-20 15:31:11.180550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2133f50 with addr=10.0.0.3, port=4420 00:23:24.836 [2024-11-20 15:31:11.180563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2133f50 is same with the state(6) to be set 00:23:24.836 [2024-11-20 15:31:11.180788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2133f50 (9): Bad file descriptor 00:23:24.836 [2024-11-20 15:31:11.180973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:24.836 [2024-11-20 15:31:11.181028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:24.836 [2024-11-20 15:31:11.181040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:24.836 [2024-11-20 15:31:11.181051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:24.836 [2024-11-20 15:31:11.181061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:24.836 15:31:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:25.095 [2024-11-20 15:31:11.412348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:25.095 15:31:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 98486 00:23:25.921 2612.80 IOPS, 10.21 MiB/s [2024-11-20T15:31:12.403Z] [2024-11-20 15:31:12.204731] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
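The fault injection and recovery above are driven entirely by the test script: host/timeout.sh drops the target's TCP listener to force the host-side timeout, lets reconnect attempts fail for a few seconds, then restores the listener with the rpc.py call logged above. A minimal stand-alone sketch of that toggle, reusing the exact commands from this run (the 10.0.0.3:4420 address and the cnode1 NQN are specific to this job):

  # Drop the listener: queued I/O on the host is completed as
  # ABORTED - SQ DELETION and the host begins reset/reconnect attempts.
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420
  sleep 3     # reconnects fail with connect() errno 111 in the meantime
  # Restore the listener: the next reset attempt succeeds and I/O resumes.
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420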
00:23:27.796 3927.33 IOPS, 15.34 MiB/s [2024-11-20T15:31:15.215Z] 5094.00 IOPS, 19.90 MiB/s [2024-11-20T15:31:16.153Z] 5994.00 IOPS, 23.41 MiB/s [2024-11-20T15:31:17.089Z] 6681.89 IOPS, 26.10 MiB/s
00:23:30.607 Latency(us)
00:23:30.607 [2024-11-20T15:31:17.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:30.607 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:30.607 Verification LBA range: start 0x0 length 0x4000
00:23:30.607 NVMe0n1 : 10.00 7252.09 28.33 5266.70 0.00 10207.56 450.72 3018551.31
00:23:30.607 [2024-11-20T15:31:17.089Z] ===================================================================================================================
00:23:30.607 [2024-11-20T15:31:17.089Z] Total : 7252.09 28.33 5266.70 0.00 10207.56 0.00 3018551.31
00:23:30.607 {
00:23:30.607   "results": [
00:23:30.607     {
00:23:30.607       "job": "NVMe0n1",
00:23:30.607       "core_mask": "0x4",
00:23:30.607       "workload": "verify",
00:23:30.607       "status": "finished",
00:23:30.607       "verify_range": {
00:23:30.607         "start": 0,
00:23:30.607         "length": 16384
00:23:30.607       },
00:23:30.607       "queue_depth": 128,
00:23:30.607       "io_size": 4096,
00:23:30.607       "runtime": 10.002088,
00:23:30.607       "iops": 7252.085764492374,
00:23:30.607       "mibps": 28.328460017548338,
00:23:30.607       "io_failed": 52678,
00:23:30.607       "io_timeout": 0,
00:23:30.607       "avg_latency_us": 10207.564071174405,
00:23:30.607       "min_latency_us": 450.7244979919679,
00:23:30.607       "max_latency_us": 3018551.3124497994
00:23:30.607     }
00:23:30.607   ],
00:23:30.607   "core_count": 1
00:23:30.607 }
00:23:30.607 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 98322
00:23:30.607 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98322 ']'
00:23:30.607 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98322
00:23:30.866 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:23:30.866 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:30.866 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98322
00:23:30.866 killing process with pid 98322
00:23:30.866 Received shutdown signal, test time was about 10.000000 seconds
00:23:30.866 Latency(us)
00:23:30.866 [2024-11-20T15:31:17.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:30.866 [2024-11-20T15:31:17.348Z] ===================================================================================================================
00:23:30.866 [2024-11-20T15:31:17.348Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:30.866 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:30.866 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:30.866 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98322'
00:23:30.866 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98322
00:23:30.866 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98322
00:23:31.125 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=98612
00:23:31.125 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:23:31.125 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 98612 /var/tmp/bdevperf.sock
00:23:31.125 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98612 ']'
00:23:31.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:31.125 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:31.125 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:31.125 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:31.125 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:31.125 15:31:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:31.125 [2024-11-20 15:31:17.440780] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
00:23:31.125 [2024-11-20 15:31:17.440859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98612 ]
00:23:31.125 [2024-11-20 15:31:17.593415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:31.384 [2024-11-20 15:31:17.656028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:31.953 15:31:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:31.953 15:31:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:23:31.953 15:31:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98612 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:23:31.953 15:31:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=98640
00:23:31.953 15:31:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:23:32.212 15:31:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:23:32.471 NVMe0n1
00:23:32.471 15:31:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=98694
00:23:32.471 15:31:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:32.471 15:31:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:23:32.471 Running I/O for 10 seconds...
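The bdev_nvme_attach_controller call above arms the behaviour the next phase exercises: --ctrlr-loss-timeout-sec bounds how long the bdev_nvme layer keeps retrying a lost controller before failing it, and --reconnect-delay-sec spaces the retries. A sketch of that step in isolation, with the flags and RPC socket path copied from this run:

  # Attach the target subsystem through the bdevperf RPC socket; the 5 s
  # controller-loss timeout and 2 s reconnect delay are this test's values.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2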
00:23:33.407 15:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:33.671 23384.00 IOPS, 91.34 MiB/s [2024-11-20T15:31:20.153Z]
00:23:33.671 [2024-11-20 15:31:20.054191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set
00:23:33.671 [... the identical nvmf_tcp_qpair_set_recv_state error for tqpair=0x80b8f0 repeats continuously (timestamps 15:31:20.054456 through 15:31:20.055164 and beyond) as the target tears the connection down ...]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 
00:23:33.672 [2024-11-20 15:31:20.055337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80b8f0 is same with the state(6) to be set 00:23:33.672 [2024-11-20 15:31:20.055800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.672 [2024-11-20 15:31:20.055834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.672 [2024-11-20 15:31:20.055857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
00:23:33.672 [2024-11-20 15:31:20.055857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:33.672 [2024-11-20 15:31:20.055867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:33.672 ... (matching READ / ABORTED - SQ DELETION pairs repeat for every command still queued on sqid:1, with only cid and lba varying, from 15:31:20.055878 through 15:31:20.058311; duplicate pairs elided) ...
00:23:33.676 [2024-11-20 15:31:20.058321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251d780 is same with the state(6) to be set
00:23:33.676 [2024-11-20 15:31:20.058332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:33.676 [2024-11-20 15:31:20.058339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:33.676 [2024-11-20 15:31:20.058346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113984 len:8 PRP1 0x0 PRP2 0x0
00:23:33.676 [2024-11-20 15:31:20.058355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:33.676 [2024-11-20 15:31:20.058635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:33.676 [2024-11-20 15:31:20.058718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b1f50 (9): Bad file descriptor
00:23:33.676 [2024-11-20 15:31:20.058823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.676 [2024-11-20 15:31:20.058838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b1f50 with addr=10.0.0.3, port=4420
00:23:33.676 [2024-11-20 15:31:20.058848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b1f50 is same with the state(6) to be set
00:23:33.676 [2024-11-20 15:31:20.058863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b1f50 (9): Bad file descriptor
00:23:33.676 [2024-11-20 15:31:20.058876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:23:33.676 [2024-11-20 15:31:20.058886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
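Note: every completion printed above carries status (00/08), i.e. status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion): the reset path tore down submission queue 1 while reads were still queued, so each of them is completed in error rather than silently dropped. A minimal sketch of counting those aborts from a saved copy of the full console log follows; the log path is an illustrative assumption:

    #!/usr/bin/env bash
    # count_aborts.sh - illustrative sketch, not part of the SPDK test suite.
    LOG=/tmp/nvmf_timeout_console.log   # assumed path of the saved console log
    # Each aborted command is printed as one "ABORTED - SQ DELETION" completion line.
    grep -c 'ABORTED - SQ DELETION' "${LOG}"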
reinitialization failed 00:23:33.676 [2024-11-20 15:31:20.058898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:33.676 [2024-11-20 15:31:20.058907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:33.676 [2024-11-20 15:31:20.058919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:33.676 15:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 98694 00:23:35.552 13002.50 IOPS, 50.79 MiB/s [2024-11-20T15:31:22.319Z] 8668.33 IOPS, 33.86 MiB/s [2024-11-20T15:31:22.319Z] [2024-11-20 15:31:22.055846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.837 [2024-11-20 15:31:22.055895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b1f50 with addr=10.0.0.3, port=4420 00:23:35.837 [2024-11-20 15:31:22.055926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b1f50 is same with the state(6) to be set 00:23:35.837 [2024-11-20 15:31:22.055949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b1f50 (9): Bad file descriptor 00:23:35.837 [2024-11-20 15:31:22.055968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:35.837 [2024-11-20 15:31:22.055977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:35.837 [2024-11-20 15:31:22.055989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:35.837 [2024-11-20 15:31:22.056001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:35.837 [2024-11-20 15:31:22.056014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:37.841 6501.25 IOPS, 25.40 MiB/s [2024-11-20T15:31:24.323Z] 5201.00 IOPS, 20.32 MiB/s [2024-11-20T15:31:24.323Z] [2024-11-20 15:31:24.052940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:37.841 [2024-11-20 15:31:24.053122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b1f50 with addr=10.0.0.3, port=4420 00:23:37.842 [2024-11-20 15:31:24.053145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b1f50 is same with the state(6) to be set 00:23:37.842 [2024-11-20 15:31:24.053178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b1f50 (9): Bad file descriptor 00:23:37.842 [2024-11-20 15:31:24.053197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:37.842 [2024-11-20 15:31:24.053207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:37.842 [2024-11-20 15:31:24.053219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:37.842 [2024-11-20 15:31:24.053232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
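The connect() failures above are errno 111, ECONNREFUSED: the test has taken the listener away, so each reconnect attempt the host driver makes at roughly two-second intervals is expected to fail until the trace is inspected further below. A minimal standalone sketch of that later check, assuming a trace.txt shaped like the one dumped after the run (names hypothetical; the real logic lives in host/timeout.sh):

    # count the 'reconnect delay' records the bdev layer traced and
    # require more than two of them, one per ~2 s retry window
    reconnects=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
    if (( reconnects <= 2 )); then
        echo "only $reconnects reconnect delays traced" >&2
        exit 1
    fi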
00:23:37.842 [2024-11-20 15:31:24.053245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:39.719 4334.17 IOPS, 16.93 MiB/s [2024-11-20T15:31:26.201Z] 3715.00 IOPS, 14.51 MiB/s [2024-11-20T15:31:26.201Z] [2024-11-20 15:31:26.050063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:39.719 [2024-11-20 15:31:26.050232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:39.719 [2024-11-20 15:31:26.050250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:39.719 [2024-11-20 15:31:26.050261] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:23:39.719 [2024-11-20 15:31:26.050276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:40.655 3250.62 IOPS, 12.70 MiB/s 00:23:40.655 Latency(us) 00:23:40.655 [2024-11-20T15:31:27.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.655 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:40.655 NVMe0n1 : 8.12 3203.08 12.51 15.77 0.00 39641.74 2960.96 7061253.96 00:23:40.655 [2024-11-20T15:31:27.137Z] =================================================================================================================== 00:23:40.655 [2024-11-20T15:31:27.137Z] Total : 3203.08 12.51 15.77 0.00 39641.74 2960.96 7061253.96 00:23:40.655 { 00:23:40.655 "results": [ 00:23:40.655 { 00:23:40.655 "job": "NVMe0n1", 00:23:40.655 "core_mask": "0x4", 00:23:40.655 "workload": "randread", 00:23:40.655 "status": "finished", 00:23:40.655 "queue_depth": 128, 00:23:40.655 "io_size": 4096, 00:23:40.655 "runtime": 8.118754, 00:23:40.655 "iops": 3203.0777136491633, 00:23:40.655 "mibps": 12.512022318942044, 00:23:40.655 "io_failed": 128, 00:23:40.655 "io_timeout": 0, 00:23:40.655 "avg_latency_us": 39641.7406755096, 00:23:40.655 "min_latency_us": 2960.9638554216867, 00:23:40.655 "max_latency_us": 7061253.963052209 00:23:40.655 } 00:23:40.655 ], 00:23:40.655 "core_count": 1 00:23:40.655 } 00:23:40.655 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:40.655 Attaching 5 probes... 
00:23:40.655 1182.092413: reset bdev controller NVMe0 00:23:40.655 1182.221733: reconnect bdev controller NVMe0 00:23:40.655 3179.194535: reconnect delay bdev controller NVMe0 00:23:40.655 3179.212725: reconnect bdev controller NVMe0 00:23:40.655 5176.299286: reconnect delay bdev controller NVMe0 00:23:40.655 5176.315747: reconnect bdev controller NVMe0 00:23:40.655 7173.512069: reconnect delay bdev controller NVMe0 00:23:40.655 7173.527819: reconnect bdev controller NVMe0 00:23:40.655 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:40.655 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:40.655 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 98640 00:23:40.655 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:40.655 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 98612 00:23:40.655 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98612 ']' 00:23:40.655 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98612 00:23:40.655 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:40.655 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.655 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98612 00:23:40.913 killing process with pid 98612 00:23:40.913 Received shutdown signal, test time was about 8.212981 seconds 00:23:40.913 00:23:40.913 Latency(us) 00:23:40.913 [2024-11-20T15:31:27.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.913 [2024-11-20T15:31:27.395Z] =================================================================================================================== 00:23:40.913 [2024-11-20T15:31:27.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.913 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:40.913 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:40.913 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98612' 00:23:40.913 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98612 00:23:40.913 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98612 00:23:41.171 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:41.171 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:41.171 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:41.171 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.171 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:41.430 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.430 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:41.430 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.430 15:31:27 
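The JSON block above is the machine-readable twin of the latency table. It is printed inline here, but if it were saved to a file such as result.json (hypothetical; jq is not part of the test), the interesting fields could be pulled out with:

    # summarize each bdevperf job from the JSON results
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.io_failed) failed I/Os"' result.json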
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.430 rmmod nvme_tcp 00:23:41.430 rmmod nvme_fabrics 00:23:41.430 rmmod nvme_keyring 00:23:41.430 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 98030 ']' 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 98030 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98030 ']' 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98030 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98030 00:23:41.431 killing process with pid 98030 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98030' 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98030 00:23:41.431 15:31:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98030 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:41.690 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:41.949 15:31:28 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:41.949 ************************************ 00:23:41.949 END TEST nvmf_timeout 00:23:41.949 ************************************ 00:23:41.949 00:23:41.949 real 0m46.990s 00:23:41.949 user 2m14.890s 00:23:41.949 sys 0m6.596s 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:41.949 15:31:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:42.208 15:31:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:42.208 15:31:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:42.208 ************************************ 00:23:42.208 END TEST nvmf_host 00:23:42.208 ************************************ 00:23:42.208 00:23:42.209 real 5m49.822s 00:23:42.209 user 14m23.000s 00:23:42.209 sys 1m26.789s 00:23:42.209 15:31:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.209 15:31:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.209 15:31:28 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:42.209 15:31:28 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:23:42.209 15:31:28 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:23:42.209 15:31:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:42.209 15:31:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.209 15:31:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:42.209 ************************************ 00:23:42.209 START TEST nvmf_target_core_interrupt_mode 00:23:42.209 ************************************ 00:23:42.209 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:23:42.209 * Looking for test storage... 
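The iptr step in the nvmf_timeout teardown above is a filter-and-restore: every rule the suite installs is tagged with an SPDK_NVMF comment (visible in the abort test's setup further down), so removing them safely reduces to

    # dump the ruleset, drop the suite's SPDK_NVMF-tagged rules, reload the rest
    iptables-save | grep -v SPDK_NVMF | iptables-restore

which leaves any pre-existing firewall rules untouched.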
00:23:42.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:23:42.209 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:42.209 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:23:42.209 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:42.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.469 --rc genhtml_branch_coverage=1 00:23:42.469 --rc genhtml_function_coverage=1 00:23:42.469 --rc genhtml_legend=1 00:23:42.469 --rc geninfo_all_blocks=1 00:23:42.469 --rc geninfo_unexecuted_blocks=1 00:23:42.469 00:23:42.469 ' 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:42.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.469 --rc genhtml_branch_coverage=1 00:23:42.469 --rc genhtml_function_coverage=1 00:23:42.469 --rc genhtml_legend=1 00:23:42.469 --rc geninfo_all_blocks=1 00:23:42.469 --rc geninfo_unexecuted_blocks=1 00:23:42.469 00:23:42.469 ' 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:42.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.469 --rc genhtml_branch_coverage=1 00:23:42.469 --rc genhtml_function_coverage=1 00:23:42.469 --rc genhtml_legend=1 00:23:42.469 --rc geninfo_all_blocks=1 00:23:42.469 --rc geninfo_unexecuted_blocks=1 00:23:42.469 00:23:42.469 ' 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:42.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.469 --rc genhtml_branch_coverage=1 00:23:42.469 --rc genhtml_function_coverage=1 00:23:42.469 --rc genhtml_legend=1 00:23:42.469 --rc geninfo_all_blocks=1 00:23:42.469 --rc geninfo_unexecuted_blocks=1 00:23:42.469 00:23:42.469 ' 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:23:42.469 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:42.470 ************************************ 00:23:42.470 START TEST nvmf_abort 00:23:42.470 ************************************ 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:23:42.470 * Looking for test storage... 00:23:42.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:42.470 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:42.730 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:23:42.730 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:42.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.730 --rc genhtml_branch_coverage=1 00:23:42.730 --rc genhtml_function_coverage=1 00:23:42.730 --rc genhtml_legend=1 00:23:42.730 --rc geninfo_all_blocks=1 00:23:42.730 --rc geninfo_unexecuted_blocks=1 00:23:42.730 00:23:42.730 ' 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:42.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.730 --rc genhtml_branch_coverage=1 00:23:42.730 --rc genhtml_function_coverage=1 00:23:42.730 --rc genhtml_legend=1 00:23:42.730 --rc geninfo_all_blocks=1 00:23:42.730 --rc geninfo_unexecuted_blocks=1 00:23:42.730 00:23:42.730 ' 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:42.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.730 --rc genhtml_branch_coverage=1 00:23:42.730 --rc genhtml_function_coverage=1 00:23:42.730 --rc genhtml_legend=1 00:23:42.730 --rc geninfo_all_blocks=1 00:23:42.730 --rc geninfo_unexecuted_blocks=1 00:23:42.730 00:23:42.730 ' 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:42.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.730 --rc genhtml_branch_coverage=1 00:23:42.730 --rc genhtml_function_coverage=1 00:23:42.730 --rc genhtml_legend=1 00:23:42.730 --rc geninfo_all_blocks=1 00:23:42.730 --rc geninfo_unexecuted_blocks=1 00:23:42.730 00:23:42.730 ' 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:23:42.730 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.731 15:31:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:42.731 Cannot find device "nvmf_init_br" 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:42.731 Cannot find device "nvmf_init_br2" 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:42.731 Cannot find device "nvmf_tgt_br" 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:42.731 Cannot find device "nvmf_tgt_br2" 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:42.731 Cannot find device "nvmf_init_br" 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:23:42.731 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:42.990 Cannot find device "nvmf_init_br2" 00:23:42.990 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:23:42.990 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:42.990 Cannot find device "nvmf_tgt_br" 00:23:42.990 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:23:42.990 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:42.990 Cannot find device "nvmf_tgt_br2" 00:23:42.990 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:23:42.990 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:42.990 Cannot find device "nvmf_br" 00:23:42.990 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:23:42.990 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:42.990 Cannot find device "nvmf_init_if" 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:42.991 Cannot find device "nvmf_init_if2" 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:42.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:42.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:42.991 
15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:42.991 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:43.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
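The connectivity pings around this point verify the virtual topology nvmf_veth_init assembled above: veth pairs for two initiator and two target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined over the nvmf_br bridge. Condensed to a single initiator/target pair, the shape of that setup is (interface names exactly as in the log; the link-up commands are elided):

    # one initiator/target link through the bridge, as built above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # once all devices are brought up, 10.0.0.1 reaches 10.0.0.3 through nvmf_br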
00:23:43.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:23:43.250 00:23:43.250 --- 10.0.0.3 ping statistics --- 00:23:43.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.250 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:43.250 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:43.250 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:23:43.250 00:23:43.250 --- 10.0.0.4 ping statistics --- 00:23:43.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.250 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:43.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:23:43.250 00:23:43.250 --- 10.0.0.1 ping statistics --- 00:23:43.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.250 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:43.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:23:43.250 00:23:43.250 --- 10.0.0.2 ping statistics --- 00:23:43.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.250 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:23:43.250 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:43.251 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:43.251 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:43.251 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=99113 00:23:43.251 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:23:43.251 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 99113 00:23:43.251 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 99113 ']' 00:23:43.251 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.251 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.251 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.251 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.251 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:43.251 [2024-11-20 15:31:29.698004] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:43.251 [2024-11-20 15:31:29.699196] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:23:43.251 [2024-11-20 15:31:29.699250] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.510 [2024-11-20 15:31:29.849681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:43.510 [2024-11-20 15:31:29.923324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.510 [2024-11-20 15:31:29.923514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.510 [2024-11-20 15:31:29.923670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.510 [2024-11-20 15:31:29.923730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.510 [2024-11-20 15:31:29.923757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.510 [2024-11-20 15:31:29.925139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.510 [2024-11-20 15:31:29.925247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.510 [2024-11-20 15:31:29.925249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.769 [2024-11-20 15:31:30.053021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:43.769 [2024-11-20 15:31:30.053535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:43.769 [2024-11-20 15:31:30.053583] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:23:43.769 [2024-11-20 15:31:30.054869] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
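At this point the harness has a working veth topology: the initiator-side interfaces (10.0.0.1 and 10.0.0.2) sit in the default namespace, the target-side interfaces (10.0.0.3 and 10.0.0.4) live inside the nvmf_tgt_ns_spdk namespace, and both sides are joined through the nvmf_br bridge with iptables ACCEPT rules for port 4420. A condensed standalone sketch of that setup, using the same names as the trace (the second if2/br2 veth pair is elided for brevity):

    # sketch of the nvmf_veth_init steps traced above (first veth pair only)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # sanity check, mirroring common.sh@222 above

The nvmf_tgt process is then launched inside the namespace (common.sh@508 above), so it listens on the 10.0.0.3/10.0.0.4 side while the test commands connect from the host side.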
00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:44.338 [2024-11-20 15:31:30.654543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:44.338 Malloc0 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:44.338 Delay0 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:44.338 15:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:44.338 [2024-11-20 15:31:30.766495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.338 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:23:44.598 [2024-11-20 15:31:30.968560] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:23:47.135 Initializing NVMe Controllers 00:23:47.135 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:23:47.135 controller IO queue size 128 less than required 00:23:47.135 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:23:47.135 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:23:47.135 Initialization complete. Launching workers. 
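The abort example that produced the attach messages above can be rerun against a live target with the same flags, copied verbatim from the trace: core mask 0x1, a short timed run (-t 1), queue depth 128, warning-level logging. The "IO queue size 128 less than required" notice appears because -q 128 is requested against a controller whose I/O queue size is also 128:

    # rerun of the traced invocation (target/abort.sh@30); flags as in the log
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The per-namespace (NS) and per-controller (CTRLR) counters that follow summarize how many reads completed normally versus how many were successfully aborted in flight.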
00:23:47.135 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37345 00:23:47.135 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37402, failed to submit 66 00:23:47.135 success 37345, unsuccessful 57, failed 0 00:23:47.135 15:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:47.135 15:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.135 15:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:47.135 rmmod nvme_tcp 00:23:47.135 rmmod nvme_fabrics 00:23:47.135 rmmod nvme_keyring 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 99113 ']' 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 99113 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 99113 ']' 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 99113 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99113 00:23:47.135 killing process with pid 99113 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99113' 00:23:47.135 
15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 99113 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 99113 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:47.135 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.393 15:31:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:23:47.393 00:23:47.393 real 0m4.938s 00:23:47.393 user 0m9.121s 00:23:47.393 sys 0m1.986s 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:47.393 ************************************ 00:23:47.393 END TEST nvmf_abort 00:23:47.393 ************************************ 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.393 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:47.393 ************************************ 00:23:47.394 START TEST nvmf_ns_hotplug_stress 00:23:47.394 ************************************ 00:23:47.394 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:23:47.653 * Looking for test storage... 00:23:47.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:47.653 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:47.653 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:23:47.653 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.653 15:31:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:23:47.653 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:47.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.654 --rc genhtml_branch_coverage=1 00:23:47.654 --rc genhtml_function_coverage=1 00:23:47.654 --rc genhtml_legend=1 00:23:47.654 --rc geninfo_all_blocks=1 00:23:47.654 --rc geninfo_unexecuted_blocks=1 00:23:47.654 00:23:47.654 ' 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:47.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.654 --rc genhtml_branch_coverage=1 00:23:47.654 --rc genhtml_function_coverage=1 00:23:47.654 --rc genhtml_legend=1 00:23:47.654 --rc geninfo_all_blocks=1 00:23:47.654 --rc geninfo_unexecuted_blocks=1 00:23:47.654 00:23:47.654 
' 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:47.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.654 --rc genhtml_branch_coverage=1 00:23:47.654 --rc genhtml_function_coverage=1 00:23:47.654 --rc genhtml_legend=1 00:23:47.654 --rc geninfo_all_blocks=1 00:23:47.654 --rc geninfo_unexecuted_blocks=1 00:23:47.654 00:23:47.654 ' 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:47.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.654 --rc genhtml_branch_coverage=1 00:23:47.654 --rc genhtml_function_coverage=1 00:23:47.654 --rc genhtml_legend=1 00:23:47.654 --rc geninfo_all_blocks=1 00:23:47.654 --rc geninfo_unexecuted_blocks=1 00:23:47.654 00:23:47.654 ' 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.654 15:31:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.654 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.655 15:31:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:47.655 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:47.915 Cannot find device "nvmf_init_br" 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:23:47.915 Cannot find device "nvmf_init_br2" 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:47.915 Cannot find device "nvmf_tgt_br" 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:47.915 Cannot find device "nvmf_tgt_br2" 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:47.915 Cannot find device "nvmf_init_br" 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:47.915 Cannot find device "nvmf_init_br2" 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:47.915 Cannot find device "nvmf_tgt_br" 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:47.915 Cannot find device "nvmf_tgt_br2" 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:47.915 Cannot find device "nvmf_br" 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:47.915 Cannot find device "nvmf_init_if" 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:47.915 Cannot find device "nvmf_init_if2" 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:47.915 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:47.915 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:47.915 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:48.175 15:31:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:48.175 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:48.175 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:23:48.175 00:23:48.175 --- 10.0.0.3 ping statistics --- 00:23:48.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.175 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:48.175 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:48.175 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:23:48.175 00:23:48.175 --- 10.0.0.4 ping statistics --- 00:23:48.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.175 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:48.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:48.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:23:48.175 00:23:48.175 --- 10.0.0.1 ping statistics --- 00:23:48.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.175 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:48.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:23:48.175 00:23:48.175 --- 10.0.0.2 ping statistics --- 00:23:48.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.175 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=99434 00:23:48.175 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:23:48.176 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 99434 00:23:48.176 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 99434 ']' 00:23:48.176 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.176 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.176 15:31:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.176 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.176 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:48.435 [2024-11-20 15:31:34.709186] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:48.435 [2024-11-20 15:31:34.710202] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:23:48.435 [2024-11-20 15:31:34.710259] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.435 [2024-11-20 15:31:34.845226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:48.435 [2024-11-20 15:31:34.912266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.435 [2024-11-20 15:31:34.912324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.435 [2024-11-20 15:31:34.912334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.435 [2024-11-20 15:31:34.912342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.435 [2024-11-20 15:31:34.912364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.435 [2024-11-20 15:31:34.913756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.435 [2024-11-20 15:31:34.913817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:48.435 [2024-11-20 15:31:34.913819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.693 [2024-11-20 15:31:35.038670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:48.693 [2024-11-20 15:31:35.039514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:23:48.693 [2024-11-20 15:31:35.039660] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:48.693 [2024-11-20 15:31:35.040514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
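The setup that follows mirrors the abort test but adds a resizable null bdev so namespaces can be removed, re-added, and resized while spdk_nvme_perf is reading. Condensed to its RPC sequence, it is roughly the sketch below, assembled from the commands traced beneath; the real ns_hotplug_stress.sh interleaves these steps with liveness checks on the perf process:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # with spdk_nvme_perf reading in the background (PERF_PID), stress hotplug:
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        $rpc bdev_null_resize NULL1 $((++null_size))
    done

Each removal shows up on the initiator as the "Read completed with error (sct=0, sc=11)" messages seen below, while bdev_null_resize exercises the namespace-resize notification path on every iteration.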
00:23:49.262 15:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.262 15:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:23:49.262 15:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:49.262 15:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:49.262 15:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:49.262 15:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.262 15:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:23:49.262 15:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:49.563 [2024-11-20 15:31:35.856161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.563 15:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:49.849 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:49.850 [2024-11-20 15:31:36.311643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:49.850 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:50.109 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:23:50.368 Malloc0 00:23:50.368 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:50.627 Delay0 00:23:50.627 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:50.886 15:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:23:51.145 NULL1 00:23:51.145 15:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:23:51.145 15:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:23:51.145 15:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=99564 00:23:51.145 15:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:23:51.145 15:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:52.522 Read completed with error (sct=0, sc=11) 00:23:52.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:52.522 15:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:52.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:52.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:52.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:52.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:52.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:52.783 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:52.783 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:23:52.783 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:23:53.045 true 00:23:53.045 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:23:53.045 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:53.613 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:53.871 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:23:53.871 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:23:54.130 true 00:23:54.130 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:23:54.130 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:54.389 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:54.648 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:23:54.648 15:31:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:23:54.648 true 00:23:54.648 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:23:54.648 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:56.027 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:56.027 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:23:56.027 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:23:56.027 true 00:23:56.286 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:23:56.286 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:56.286 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:56.546 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:23:56.546 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:23:56.805 true 00:23:56.805 15:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:23:56.805 15:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:57.741 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:58.000 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:23:58.000 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:23:58.258 true 00:23:58.258 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:23:58.258 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:58.258 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:58.517 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:23:58.517 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:23:58.776 true 00:23:58.776 15:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:23:58.776 15:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:59.713 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:59.973 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:23:59.973 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:24:00.232 true 00:24:00.232 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:00.232 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:00.491 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:00.491 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:24:00.491 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:24:00.750 true 00:24:00.750 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:00.750 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:01.686 15:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:01.945 15:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:24:01.945 15:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:24:02.205 true 00:24:02.205 15:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:02.205 15:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:02.464 15:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:02.723 15:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:24:02.723 15:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:24:02.723 true 00:24:02.723 15:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:02.723 15:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:03.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:03.918 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:03.918 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:24:03.918 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:24:04.177 true 00:24:04.177 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:04.177 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:04.437 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:04.696 15:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:24:04.696 15:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:24:04.956 true 00:24:04.956 15:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:04.956 15:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:05.896 15:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:06.192 15:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:24:06.192 15:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:24:06.192 true 
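
The @44-@50 entries repeating through this stretch are easier to follow with the one-time setup in view. Collecting the @25-@42 commands traced at the top of this test gives the sketch below; $rpc_py as shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the trailing & on perf are assumptions, while the commands and arguments themselves are verbatim from the trace:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                                         # assumed shorthand
null_size=1000                                                                             # @25
$rpc_py nvmf_create_transport -t tcp -o -u 8192                                            # @27: TCP transport
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # @29: -m caps the namespace count at 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420  # @30
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420                   # @31
$rpc_py bdev_malloc_create 32 512 -b Malloc0                                               # @32: small malloc disk, 512 B blocks
$rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 # @33: deliberately slow bdev (latency args in microseconds)
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0                            # @34: first namespace added
$rpc_py bdev_null_create NULL1 1000 512                                                    # @35: null bdev, resized below
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1                             # @36: second namespace
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &                                              # @40: 30 s of qd-128 512 B random reads; & implied by @42
PERF_PID=$!                                                                                # @42: 99564 in this run

The NSID numbering is inferred from the perf summary further down, where NSID 1 lands on the slow Delay0 bdev and NSID 2 on NULL1.
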
00:24:06.192 15:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:06.192 15:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:06.454 15:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:06.713 15:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:24:06.713 15:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:24:06.713 true 00:24:06.973 15:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:06.973 15:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:07.910 15:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:07.910 15:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:24:07.910 15:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:24:08.169 true 00:24:08.169 15:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:08.169 15:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:08.428 15:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:08.687 15:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:24:08.687 15:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:24:08.947 true 00:24:08.947 15:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:08.947 15:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:09.886 15:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:10.145 15:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 
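
Each pass of the stretch above maps onto one trip through the test's main loop. A minimal reconstruction from the @44-@50 line tags (the while shape and names are assumptions; the commands and the arithmetic are verbatim):

while kill -0 "$PERF_PID"; do                                        # @44: loop while perf (99564) is alive
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove NSID 1 under I/O
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: hot-add it back
    null_size=$((null_size + 1))                                     # @49: 1001, 1002, ... per iteration
    $rpc_py bdev_null_resize NULL1 $null_size                        # @50: the "true" entries are its RPC result
done

The loop keeps cycling until spdk_nvme_perf exits after its 30 second run; the 'kill: (99564) - No such process' entry further down marks that point. The 'Read completed with error (sct=0, sc=11)' lines are the expected fallout of reads landing on NSID 1 while it is detached, and perf's -Q 1000 option appears to rate-limit that reporting, which would explain the 'Message suppressed 999 times' entries.
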
00:24:10.145 15:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:24:10.145 true 00:24:10.145 15:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:10.145 15:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:10.405 15:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:10.663 15:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:24:10.663 15:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:24:10.923 true 00:24:10.923 15:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:10.923 15:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:11.861 15:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:12.121 15:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:24:12.121 15:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:24:12.121 true 00:24:12.380 15:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:12.380 15:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:12.380 15:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:12.639 15:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:24:12.639 15:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:24:12.903 true 00:24:12.903 15:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:12.903 15:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:13.843 15:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:14.102 15:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:24:14.102 15:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:24:14.361 true 00:24:14.361 15:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:14.361 15:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:14.361 15:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:14.620 15:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:24:14.620 15:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:24:14.878 true 00:24:14.878 15:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:14.878 15:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:15.816 15:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:16.075 15:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:24:16.075 15:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:24:16.333 true 00:24:16.333 15:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:16.333 15:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:16.592 15:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:16.592 15:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:24:16.592 15:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:24:16.851 true 00:24:16.851 15:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:16.851 15:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:17.789 15:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:18.048 15:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:24:18.048 15:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:24:18.332 true 00:24:18.332 15:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:18.332 15:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:18.592 15:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:18.852 15:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:24:18.852 15:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:24:18.852 true 00:24:18.852 15:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:18.852 15:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:19.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:19.789 15:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:20.046 15:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:24:20.046 15:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:24:20.305 true 00:24:20.305 15:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:20.305 15:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:20.564 15:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:20.821 15:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:24:20.821 15:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1029 00:24:20.821 true 00:24:21.079 15:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:21.079 15:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:24:22.014 Initializing NVMe Controllers
00:24:22.014 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:24:22.014 Controller IO queue size 128, less than required.
00:24:22.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:22.014 Controller IO queue size 128, less than required.
00:24:22.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:22.014 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:22.014 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:22.014 Initialization complete. Launching workers.
00:24:22.014 ========================================================
00:24:22.014                                                                             Latency(us)
00:24:22.014 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:24:22.014 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     372.03       0.18  195453.15    3113.34 1021524.81
00:24:22.014 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   14920.30       7.29    8578.63    2983.58  441477.98
00:24:22.014 ========================================================
00:24:22.014 Total                                                                    :   15292.33       7.47   13124.94    2983.58 1021524.81
00:24:22.014
00:24:22.014 15:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:22.014 15:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:24:22.014 15:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:24:22.272 true 00:24:22.272 15:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99564 00:24:22.272 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (99564) - No such process 00:24:22.272 15:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 99564 00:24:22.272 15:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:22.531 15:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:22.791 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:24:22.791 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:24:22.791 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 
)) 00:24:22.791 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:22.791 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:24:22.791 null0 00:24:22.791 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:22.791 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:22.791 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:24:23.049 null1 00:24:23.049 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:23.049 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:23.049 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:24:23.308 null2 00:24:23.308 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:23.308 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:23.308 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:24:23.567 null3 00:24:23.567 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:23.567 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:23.567 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:24:23.826 null4 00:24:23.826 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:23.826 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:23.826 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:24:23.826 null5 00:24:24.084 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:24.084 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:24.084 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:24:24.084 null6 00:24:24.084 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:24.084 15:32:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:24.084 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:24:24.343 null7 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
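
From here the trace is in the parallel phase set up by the @58-@64 entries: eight null bdevs are created and eight add_remove workers are started in the background, one namespace ID apiece. A sketch assembled from the tags; the for-loop shape follows the (( i = 0 )) / (( i < nthreads )) / (( ++i )) arithmetic entries, and the final wait is inferred from the 'wait 100602 100604 ...' entry further down:

nthreads=8                                        # @58
pids=()                                           # @58
for ((i = 0; i < nthreads; ++i)); do              # @59
    $rpc_py bdev_null_create null$i 100 4096      # @60: null0 .. null7
done
for ((i = 0; i < nthreads; ++i)); do              # @62
    add_remove $((i + 1)) null$i &                # @63: worker for NSID i+1
    pids+=($!)                                    # @64
done
wait "${pids[@]}"                                 # inferred; matches the @66 'wait 100602 100604 ...' entry
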
00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
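
The interleaved @14-@18 entries are those workers running add_remove: ten add/remove cycles against a fixed namespace ID. Reconstructed from the tags (the function wrapper is an assumption; the body lines are verbatim):

add_remove() {
    local nsid=$1 bdev=$2                                                        # @14: e.g. nsid=1 bdev=null0
    for ((i = 0; i < 10; ++i)); do                                               # @16
        $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev  # @17
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid        # @18
    done
}

With eight of these running at once against the same subsystem, the add and remove entries interleave freely, which is the namespace hotplug stress the test name refers to.
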
00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:24:24.343 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 100602 100604 100606 100609 100612 100614 100615 100619 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.344 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:24.603 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:24.603 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:24.603 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:24.603 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:24.603 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:24.603 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:24.603 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:24.603 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.863 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:25.122 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:25.122 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:25.122 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:25.122 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:25.122 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:25.122 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:25.122 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:25.122 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.382 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:25.641 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:25.641 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:25.641 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:25.641 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:25.641 15:32:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:25.641 15:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:25.641 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:25.641 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:25.641 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.641 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.641 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:25.900 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.900 15:32:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.901 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:25.901 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.901 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.901 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:25.901 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.901 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.901 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:25.901 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:25.901 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:26.160 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:26.160 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:26.160 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:26.160 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:26.160 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:26.160 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.160 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.160 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:26.160 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:24:26.160 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.160 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:26.160 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.418 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:24:26.419 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:26.419 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.419 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.419 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:26.677 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:26.677 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:26.677 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:26.677 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.677 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.677 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:26.677 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.677 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.677 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:26.677 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:26.677 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.677 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.677 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:26.677 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.936 15:32:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.936 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:27.195 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:27.195 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.195 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.195 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:27.195 15:32:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:27.195 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:27.195 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:27.195 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:27.195 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:27.195 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.195 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.195 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.454 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:27.713 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:27.713 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.713 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.713 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:27.713 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:27.714 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:27.714 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:27.714 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:27.714 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.714 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.714 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:27.714 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:27.714 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.714 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.714 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:27.972 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.973 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.973 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:28.231 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:28.231 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:28.232 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:28.232 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:28.232 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:28.232 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:28.232 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:28.232 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:28.232 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:28.232 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:28.232 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:28.232 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:28.491 15:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:28.749 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:28.749 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:28.749 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:28.749 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:28.749 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:28.749 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:28.749 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:28.750 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:28.750 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:28.750 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:28.750 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:28.750 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:29.008 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:29.009 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:29.009 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:29.009 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:29.009 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:29.267 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:29.267 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:29.267 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:29.267 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:29.267 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:29.267 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:29.267 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:29.267 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:29.267 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:29.267 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:29.267 15:32:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
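
The ten iterations of namespace churn traced above reduce to a small loop. A minimal bash sketch, reconstructed from the ns_hotplug_stress.sh@16-18 markers in the trace: the shuffled visit order and the exact loop body are assumptions, and the real script also keeps host I/O running against the namespaces while they come and go.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
i=0
while ((i < 10)); do
    # attach null0..null7 as namespaces 1..8 of cnode1, in random order
    for n in $(shuf -e {1..8}); do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    # and detach them again, also in random order
    for n in $(shuf -e {1..8}); do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
    ((++i))
done
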
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:29.526 rmmod nvme_tcp
00:24:29.526 rmmod nvme_fabrics
00:24:29.526 rmmod nvme_keyring
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 99434 ']'
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 99434
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 99434 ']'
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 99434
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:29.526 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99434
00:24:29.785 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:29.785 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:29.785 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99434'
00:24:29.785 killing process with pid 99434
00:24:29.785 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 99434
00:24:29.785 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 99434
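
Teardown pattern worth noting: nvmfcleanup syncs, then retries modprobe -v -r up to 20 times (the @125 for-loop), and only afterwards is the target process reaped via killprocess. A hedged sketch of that helper, based on the autotest_common.sh@954-978 markers above; the non-Linux branch and the sudo special case are inferred from the @959/@964 checks, not confirmed here.

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    if kill -0 "$pid" 2> /dev/null; then             # still alive?
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        else
            process_name=$(ps -c -o command "$pid" | tail -1)
        fi
        if [[ $process_name == sudo ]]; then
            kill -9 "$pid"                           # sudo does not forward SIGTERM
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid" || true                          # reap so the suite exits cleanly
    fi
}
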
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:24:30.044 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
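
Collected in one place, the sandbox teardown that nvmf_tcp_fini and nvmf_veth_fini just replayed amounts to the following. Interface and netns names are taken from the trace; the || true guards and the final netns delete standing in for remove_spdk_ns are assumptions made to keep the sketch self-contained.

# drop only the SPDK_NVMF-tagged firewall rules, keep everything else
iptables-save | grep -v SPDK_NVMF | iptables-restore
# detach the veth endpoints from the bridge, then remove them
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster || true
    ip link set "$dev" down || true
done
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
ip netns delete nvmf_tgt_ns_spdk || true
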
00:24:30.303
00:24:30.303 real 0m42.788s
00:24:30.303 user 2m54.741s
00:24:30.303 sys 0m25.375s
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:24:30.303 ************************************
00:24:30.303 END TEST nvmf_ns_hotplug_stress
00:24:30.303 ************************************
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:24:30.303 ************************************
00:24:30.303 START TEST nvmf_delete_subsystem
00:24:30.303 ************************************
00:24:30.303 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:24:30.569 * Looking for test storage...
00:24:30.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
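
The END TEST / START TEST banners and the real/user/sys block come from the harness's run_test wrapper. A rough sketch of its shape, assumed rather than copied from autotest_common.sh, whose real version also tracks suite nesting and xtrace state:

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"        # run the wrapped test script; bash prints the timing summary
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}
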
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:30.569 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
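
The cmp_versions walk just traced answers "lt 1.15 2", i.e. is lcov 1.15 older than 2, by comparing the dot/dash/colon-separated fields numerically from left to right, with a missing field counting as 0. A condensed sketch that reproduces the traced behaviour; the real scripts/common.sh helper also implements >, >=, <= and == and routes each field through its decimal() validator, and purely numeric fields are assumed here.

lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local ver1 ver2 op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        # first unequal field decides; ${...:-0} pads "2" out to "2.0"
        if ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})); then
            [[ $op == ">" ]]; return
        elif ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})); then
            [[ $op == "<" ]]; return
        fi
    done
    [[ $op == "==" ]]    # every field matched
}

# lt 1.15 2: field 0 compares 1 < 2, so the "<" claim holds and lt returns 0
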
genhtml_legend=1 00:24:30.570 --rc geninfo_all_blocks=1 00:24:30.570 --rc geninfo_unexecuted_blocks=1 00:24:30.570 00:24:30.570 ' 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.570 15:32:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.570 15:32:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.570 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:30.571 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:30.571 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:30.571 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:30.571 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:30.571 Cannot find device "nvmf_init_br" 00:24:30.571 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:24:30.571 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:30.571 Cannot find device "nvmf_init_br2" 00:24:30.571 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:24:30.571 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:30.571 Cannot find device "nvmf_tgt_br" 00:24:30.571 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:24:30.571 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:30.571 Cannot find device "nvmf_tgt_br2" 00:24:30.571 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:24:30.571 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:30.860 Cannot find device "nvmf_init_br" 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:30.860 Cannot find device "nvmf_init_br2" 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:30.860 Cannot find device "nvmf_tgt_br" 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:30.860 Cannot find device "nvmf_tgt_br2" 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:30.860 Cannot find device "nvmf_br" 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:30.860 Cannot find device "nvmf_init_if" 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:30.860 Cannot find device "nvmf_init_if2" 00:24:30.860 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:30.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:30.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:30.861 15:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:30.861 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:31.147 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:31.147 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:31.147 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:31.147 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:31.147 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:31.147 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:31.147 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:31.147 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:31.148 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:31.148 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:31.148 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:24:31.148 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:24:31.148 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:24:31.148 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:24:31.148 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:24:31.148 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms
00:24:31.148 
00:24:31.148 --- 10.0.0.3 ping statistics ---
00:24:31.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:31.148 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms
00:24:31.148 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:24:31.148 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:24:31.148 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms
00:24:31.148 
00:24:31.148 --- 10.0.0.4 ping statistics ---
00:24:31.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:31.148 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
00:24:31.148 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:24:31.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:31.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms
00:24:31.149 
00:24:31.149 --- 10.0.0.1 ping statistics ---
00:24:31.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:31.149 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms
00:24:31.149 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:24:31.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:31.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms
00:24:31.149 
00:24:31.149 --- 10.0.0.2 ping statistics ---
00:24:31.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:31.149 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms
00:24:31.149 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:31.149 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0
00:24:31.149 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:31.149 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:31.149 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:31.149 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:31.149 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:31.149 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:31.149 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:31.149 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:24:31.149 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:31.150 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:31.150 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:24:31.150 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=102018
00:24:31.150 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:24:31.150 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 102018
00:24:31.150 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 102018 ']'
00:24:31.150 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:31.150 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:31.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:31.150 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
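
Stripped of the xtrace prefixes, the nvmf_veth_init sequence above reduces to a small, regular recipe. Here is a condensed sketch for the first initiator/target pair only (the second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4, is wired the same way; names and addresses are taken from the trace, error handling omitted):

# Two veth pairs: one leg stays in the root namespace, its peer joins the
# bridge; the target-side interface moves into the nvmf_tgt_ns_spdk namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# A single bridge ties the root-namespace legs together so initiator and
# target see each other at L2.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Open the NVMe/TCP port, then verify reachability in both directions.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The long run of "Cannot find device" messages earlier is the expected other half of this helper: every setup step is preceded by a teardown of any leftover interfaces, and on a fresh VM each deletion fails harmlessly (hence the paired true guards in the trace). The -m comment tags on the iptables rules let the suite find and remove exactly its own rules later.
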
00:24:31.150 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.150 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:31.150 [2024-11-20 15:32:17.564462] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:31.150 [2024-11-20 15:32:17.565449] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:24:31.150 [2024-11-20 15:32:17.565499] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.413 [2024-11-20 15:32:17.702403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:31.413 [2024-11-20 15:32:17.764456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.413 [2024-11-20 15:32:17.764507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.413 [2024-11-20 15:32:17.764534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.413 [2024-11-20 15:32:17.764544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.413 [2024-11-20 15:32:17.764551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.413 [2024-11-20 15:32:17.765863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.413 [2024-11-20 15:32:17.765865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.413 [2024-11-20 15:32:17.890582] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:31.413 [2024-11-20 15:32:17.891144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:24:31.413 [2024-11-20 15:32:17.891419] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
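
At this point the target is up: nvmfappstart backgrounds nvmf_tgt inside the namespace and waitforlisten blocks until the RPC socket answers. In outline it looks like the following sketch (not the exact autotest_common.sh helper; probing with rpc_get_methods is an assumption that matches waitforlisten's observable behavior, and $rootdir stands in for the SPDK checkout):

# Launch the target on cores 0-1 (-m 0x3) in interrupt mode, then poll the
# RPC socket until it accepts a request; max_retries=100 mirrors the trace.
ip netns exec nvmf_tgt_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null && break
    sleep 0.1    # assumption: the real helper's poll interval is not visible in this log
done

The -m 0x3 mask is why exactly two reactors report in (cores 0 and 1), and --interrupt-mode is what drives the thread.c notices about app_thread and the two nvmf_tgt poll-group threads being set to intr mode.
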
00:24:31.980 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.980 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:24:31.980 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.980 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.980 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:32.239 [2024-11-20 15:32:18.522696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:32.239 [2024-11-20 15:32:18.550961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:32.239 NULL1 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.239 15:32:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:32.239 Delay0 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=102069 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:24:32.239 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:24:32.498 [2024-11-20 15:32:18.776424] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
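
Everything the test needs was just provisioned over the RPC socket before spdk_nvme_perf was launched; rpc_cmd in the trace is the suite's thin wrapper around rpc.py. The same chain as plain commands, with arguments copied from the trace:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks
./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

Delay0 wraps the null bdev with one-second (1000000 µs) average and p99 latencies for both reads and writes, so perf at queue depth 128 is guaranteed to have a full queue of in-flight I/O whenever the subsystem is deleted out from under it. The deprecation warning that follows is incidental: perf connects to the discovery subsystem on a listener that was never explicitly added to it.
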
00:24:34.402 15:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:34.402 15:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.402 15:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Write completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 starting I/O failed: -6 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 starting I/O failed: -6 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 starting I/O failed: -6 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Write completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 starting I/O failed: -6 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 starting I/O failed: -6 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 starting I/O failed: -6 00:24:34.402 Write completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Write completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 starting I/O failed: -6 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 starting I/O failed: -6 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Write completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 starting I/O failed: -6 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Write completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 starting I/O failed: -6 00:24:34.402 Write completed with error (sct=0, sc=8) 00:24:34.402 Read completed with error (sct=0, sc=8) 00:24:34.402 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 starting I/O failed: -6 00:24:34.403 [2024-11-20 15:32:20.799489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212e7e0 is same with the state(6) to be set 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed 
with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 [2024-11-20 15:32:20.800565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212dc30 is same with the state(6) to be set 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 starting I/O failed: -6 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 starting I/O failed: -6 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 starting I/O failed: -6 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 
00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 starting I/O failed: -6 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 starting I/O failed: -6 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 starting I/O failed: -6 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 starting I/O failed: -6 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 starting I/O failed: -6 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 starting I/O failed: -6 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 starting I/O failed: -6 00:24:34.403 [2024-11-20 15:32:20.804031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1330000c40 is same with the state(6) to be set 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed 
with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Write completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:34.403 Read completed with error (sct=0, sc=8) 00:24:35.341 [2024-11-20 15:32:21.788784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(6) to be set 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 [2024-11-20 15:32:21.799138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212da50 is same with the state(6) to be set 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 [2024-11-20 15:32:21.799526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2130ea0 is same with the 
state(6) to be set 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 [2024-11-20 15:32:21.801109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f133000d020 is same with the state(6) to be set 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 00:24:35.341 Read completed with error (sct=0, sc=8) 00:24:35.341 Write completed with error (sct=0, sc=8) 
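
This wall of completions is the point of the test, not a malfunction: nvmf_delete_subsystem was issued (at 15:32:20 above) while spdk_nvme_perf still had its 128-deep queues full of one-second I/Os against Delay0. Each queued command comes back with sct=0, sc=8, which in NVMe generic-status terms reads as "Command Aborted due to SQ Deletion", and the interleaved "starting I/O failed: -6" lines are fresh submissions bouncing off the vanished qpair with what matches -ENXIO. The suite then insists that perf dies on its own; condensed from the trace (rpc_cmd and $perf_pid are the suite's helpers, and the failure path is sketched):

# Delete the subsystem under load, then probe (never signal) the initiator:
# it must exit by itself once its only controller is gone.
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
delay=0
while kill -0 "$perf_pid" 2> /dev/null; do
    (( delay++ > 30 )) && exit 1    # ~15 s budget at 0.5 s per probe
    sleep 0.5
done

The NOT wait 102069 that follows asserts the inverse on the shell side: wait must report a non-zero status for the dead perf process (es=1 in the trace), proving it failed rather than finishing its 5-second run.
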
00:24:35.341 [2024-11-20 15:32:21.801424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f133000d800 is same with the state(6) to be set
00:24:35.341 Initializing NVMe Controllers
00:24:35.341 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:24:35.341 Controller IO queue size 128, less than required.
00:24:35.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:35.341 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:24:35.341 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:24:35.341 Initialization complete. Launching workers.
00:24:35.341 ========================================================
00:24:35.342 Latency(us)
00:24:35.342 Device Information : IOPS MiB/s Average min max
00:24:35.342 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.40 0.08 901220.50 518.26 1006299.48
00:24:35.342 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.96 0.07 1057041.13 290.80 2002000.45
00:24:35.342 ========================================================
00:24:35.342 Total : 317.36 0.15 975339.26 290.80 2002000.45
00:24:35.342 
00:24:35.342 [2024-11-20 15:32:21.802714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2129ee0 (9): Bad file descriptor
00:24:35.342 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:24:35.342 15:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.342 15:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:24:35.342 15:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 102069
00:24:35.342 15:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 102069
00:24:35.911 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (102069) - No such process
00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 102069
00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 102069
00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:35.911 15:32:22
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 102069 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:35.911 [2024-11-20 15:32:22.335478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=102109 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102109 00:24:35.911 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:36.170 [2024-11-20 15:32:22.527674] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing 
connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:36.429 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:36.429 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102109 00:24:36.429 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:36.998 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:36.998 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102109 00:24:36.998 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:37.566 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:37.566 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102109 00:24:37.566 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:38.135 15:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:38.135 15:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102109 00:24:38.135 15:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:38.704 15:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:38.704 15:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102109 00:24:38.704 15:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:38.963 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:38.963 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102109 00:24:38.963 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:39.222 Initializing NVMe Controllers 00:24:39.222 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:39.222 Controller IO queue size 128, less than required. 00:24:39.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:39.222 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:24:39.222 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:24:39.222 Initialization complete. Launching workers. 
00:24:39.222 ========================================================
00:24:39.222 Latency(us)
00:24:39.222 Device Information : IOPS MiB/s Average min max
00:24:39.222 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002292.06 1000144.02 1006369.82
00:24:39.222 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004039.11 1000156.69 1041819.31
00:24:39.222 ========================================================
00:24:39.222 Total : 256.00 0.12 1003165.59 1000144.02 1041819.31
00:24:39.222
00:24:39.482 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:24:39.482 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102109
00:24:39.482 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (102109) - No such process
00:24:39.482 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 102109
00:24:39.482 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:24:39.482 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:24:39.482 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:39.482 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:24:39.482 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:39.482 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:24:39.482 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:39.482 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:39.742 rmmod nvme_tcp
00:24:39.742 rmmod nvme_fabrics
00:24:39.742 rmmod nvme_keyring
00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 102018 ']'
00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 102018
00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 102018 ']'
00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 102018
00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 --
# ps --no-headers -o comm= 102018 00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:39.742 killing process with pid 102018 00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102018' 00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 102018 00:24:39.742 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 102018 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:40.001 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:24:40.260 00:24:40.260 real 0m9.909s 00:24:40.260 user 0m23.433s 00:24:40.260 sys 0m3.676s 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:40.260 ************************************ 00:24:40.260 END TEST nvmf_delete_subsystem 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:40.260 ************************************ 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:40.260 ************************************ 00:24:40.260 START TEST nvmf_host_management 00:24:40.260 ************************************ 00:24:40.260 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:24:40.521 * Looking for test storage... 
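The delete_subsystem run that just ended never calls wait on a live spdk_nvme_perf; it polls the perf PID with kill -0 until the I/O fails out from under the deleted subsystem, then expects the reap to report an error. A minimal sketch of that polling pattern, with illustrative variable names rather than the script's exact code (the xtrace above shows guards of 30 and 20 iterations for the two loops):

# Sketch only (assumes perf was started in the background with '&'):
perf_pid=$!
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && exit 1   # give up after ~10 s of 0.5 s polls
    sleep 0.5
done
wait "$perf_pid" || true           # reap it; a nonzero exit is expected here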
00:24:40.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:40.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.521 --rc genhtml_branch_coverage=1 00:24:40.521 --rc genhtml_function_coverage=1 00:24:40.521 --rc genhtml_legend=1 00:24:40.521 --rc geninfo_all_blocks=1 00:24:40.521 --rc geninfo_unexecuted_blocks=1 00:24:40.521 00:24:40.521 ' 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:40.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.521 --rc genhtml_branch_coverage=1 00:24:40.521 --rc genhtml_function_coverage=1 00:24:40.521 --rc genhtml_legend=1 00:24:40.521 --rc geninfo_all_blocks=1 00:24:40.521 --rc geninfo_unexecuted_blocks=1 00:24:40.521 00:24:40.521 ' 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:40.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.521 --rc genhtml_branch_coverage=1 00:24:40.521 --rc genhtml_function_coverage=1 00:24:40.521 --rc genhtml_legend=1 00:24:40.521 --rc geninfo_all_blocks=1 00:24:40.521 --rc geninfo_unexecuted_blocks=1 00:24:40.521 00:24:40.521 ' 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:40.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.521 --rc genhtml_branch_coverage=1 00:24:40.521 --rc genhtml_function_coverage=1 00:24:40.521 --rc genhtml_legend=1 
00:24:40.521 --rc geninfo_all_blocks=1 00:24:40.521 --rc geninfo_unexecuted_blocks=1 00:24:40.521 00:24:40.521 ' 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.521 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.522 15:32:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:40.522 15:32:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:40.522 Cannot find device "nvmf_init_br" 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:40.522 Cannot find device "nvmf_init_br2" 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:40.522 Cannot find device "nvmf_tgt_br" 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:40.522 Cannot find device "nvmf_tgt_br2" 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:24:40.522 15:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:40.783 Cannot find device "nvmf_init_br" 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:24:40.783 Cannot find device "nvmf_init_br2" 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:40.783 Cannot find device "nvmf_tgt_br" 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:40.783 Cannot find device "nvmf_tgt_br2" 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:40.783 Cannot find device "nvmf_br" 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:40.783 Cannot find device "nvmf_init_if" 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:40.783 Cannot find device "nvmf_init_if2" 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:40.783 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:40.783 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:40.783 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:41.043 15:32:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:41.043 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:41.043 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:24:41.043 00:24:41.043 --- 10.0.0.3 ping statistics --- 00:24:41.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.043 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:41.043 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:41.043 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:24:41.043 00:24:41.043 --- 10.0.0.4 ping statistics --- 00:24:41.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.043 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:41.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:41.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:24:41.043 00:24:41.043 --- 10.0.0.1 ping statistics --- 00:24:41.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.043 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:41.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:41.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:24:41.043 00:24:41.043 --- 10.0.0.2 ping statistics --- 00:24:41.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.043 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=102398 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 102398 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 102398 ']' 00:24:41.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
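The four pings above are the smoke test for the veth/bridge layout the preceding ip commands built: two initiator interfaces on the host (10.0.0.1, 10.0.0.2) and two target interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), all joined by the nvmf_br bridge. Condensed to a single initiator/target pair (the log builds two of each), the construction amounts to:

# Sketch of one pair, using the interface names from the log:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target end moves into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br               # bridge the two stub ends together
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.3                                    # host reaches the namespaced target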
00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:41.043 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:41.043 [2024-11-20 15:32:27.498163] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:41.043 [2024-11-20 15:32:27.499149] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:24:41.043 [2024-11-20 15:32:27.499194] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.304 [2024-11-20 15:32:27.649862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:41.304 [2024-11-20 15:32:27.717867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.304 [2024-11-20 15:32:27.717920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.304 [2024-11-20 15:32:27.717930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.304 [2024-11-20 15:32:27.717939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.304 [2024-11-20 15:32:27.717946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:41.304 [2024-11-20 15:32:27.719795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:41.304 [2024-11-20 15:32:27.719888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:41.304 [2024-11-20 15:32:27.720069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:41.304 [2024-11-20 15:32:27.720067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.612 [2024-11-20 15:32:27.845435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:41.612 [2024-11-20 15:32:27.845711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:41.612 [2024-11-20 15:32:27.846345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:24:41.612 [2024-11-20 15:32:27.846483] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:41.612 [2024-11-20 15:32:27.847450] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
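Those reactor notices decode directly from the nvmf_tgt command line captured above: -m 0x1E is binary 11110, i.e. cores 1 through 4 (core 0 stays free for the bdevperf initiator later), and --interrupt-mode is what produces the spdk_thread "intr mode" messages. Restated as a launch sketch using the helper names from the log:

# Flags and paths verbatim from the log; waitforlisten is the autotest_common.sh helper.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!                 # 102398 in this run
waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until the target answers RPCs
# -m 0x1E = 0b11110 -> reactors on cores 1,2,3,4, matching the four notices above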
00:24:42.181 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.181 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:24:42.181 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:42.181 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.181 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:42.182 [2024-11-20 15:32:28.453860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:42.182 Malloc0 00:24:42.182 [2024-11-20 15:32:28.553801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:42.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
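The subsystem build-out above is batched: host_management.sh writes its RPCs to rpcs.txt and plays them through a single rpc_cmd session. Issued one at a time with SPDK's stock rpc.py client, the equivalent sequence is roughly the following; the Malloc0 sizing reuses the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set earlier, and the serial number shown is illustrative:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# The last call is what triggers the "Listening on 10.0.0.3 port 4420" notice above.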
00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=102476 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 102476 /var/tmp/bdevperf.sock 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 102476 ']' 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:42.182 { 00:24:42.182 "params": { 00:24:42.182 "name": "Nvme$subsystem", 00:24:42.182 "trtype": "$TEST_TRANSPORT", 00:24:42.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.182 "adrfam": "ipv4", 00:24:42.182 "trsvcid": "$NVMF_PORT", 00:24:42.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.182 "hdgst": ${hdgst:-false}, 00:24:42.182 "ddgst": ${ddgst:-false} 00:24:42.182 }, 00:24:42.182 "method": "bdev_nvme_attach_controller" 00:24:42.182 } 00:24:42.182 EOF 00:24:42.182 )") 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
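The --json /dev/fd/63 in the bdevperf command line above is the footprint of bash process substitution: gen_nvmf_target_json, the helper whose heredoc template is traced here, is handed to bdevperf as a file descriptor, and the expanded JSON it emits is printed just below. Spelled out:

# Flags verbatim from the log:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!    # 102476 in this run
# -r: a private RPC socket, separate from the target's /var/tmp/spdk.sock
# -q 64 -o 65536: keep 64 I/Os of 64 KiB each in flight
# -w verify -t 10: write-then-read-back-verify workload for 10 seconds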
00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:24:42.182 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:42.182 "params": { 00:24:42.182 "name": "Nvme0", 00:24:42.182 "trtype": "tcp", 00:24:42.182 "traddr": "10.0.0.3", 00:24:42.182 "adrfam": "ipv4", 00:24:42.182 "trsvcid": "4420", 00:24:42.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:42.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:42.182 "hdgst": false, 00:24:42.182 "ddgst": false 00:24:42.182 }, 00:24:42.182 "method": "bdev_nvme_attach_controller" 00:24:42.182 }' 00:24:42.442 [2024-11-20 15:32:28.680156] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:24:42.442 [2024-11-20 15:32:28.680228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102476 ] 00:24:42.442 [2024-11-20 15:32:28.832206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.442 [2024-11-20 15:32:28.897292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.701 Running I/O for 10 seconds... 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.269 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:43.269 [2024-11-20 15:32:29.673862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.269 [2024-11-20 15:32:29.673908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.269 [2024-11-20 15:32:29.673929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.269 [2024-11-20 15:32:29.673940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.269 [2024-11-20 15:32:29.673951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.269 [2024-11-20 15:32:29.673960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.269 [2024-11-20 15:32:29.673971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.269 [2024-11-20 15:32:29.673980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.269 [2024-11-20 15:32:29.673990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.269 [2024-11-20 15:32:29.673999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.269 [2024-11-20 15:32:29.674009] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.270 [2024-11-20 15:32:29.674750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.270 [2024-11-20 15:32:29.674760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.674779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.674798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.674816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.674836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.674855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.674873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.674891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.674910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.674928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.674946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.674964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.674983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.674991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.675001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.675009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.675019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.675028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.675037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.675046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.675056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.675067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.675077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.675085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.675096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.675104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 [2024-11-20 15:32:29.675114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.271 [2024-11-20 15:32:29.675123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.271 task offset: 19456 on job bdev=Nvme0n1 fails 00:24:43.271 00:24:43.271 Latency(us) 00:24:43.271 [2024-11-20T15:32:29.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.271 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:43.271 Job: Nvme0n1 ended in about 0.57 seconds with error 00:24:43.271 Verification LBA range: start 0x0 length 0x400 00:24:43.271 
Nvme0n1 : 0.57 2026.17 126.64 112.57 0.00 29285.13 2000.30 27161.91
00:24:43.271 [2024-11-20T15:32:29.753Z] ===================================================================================================================
00:24:43.271 [2024-11-20T15:32:29.753Z] Total : 2026.17 126.64 112.57 0.00 29285.13 2000.30 27161.91
00:24:43.271 [2024-11-20 15:32:29.676130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:43.271 [2024-11-20 15:32:29.677849] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:43.271 [2024-11-20 15:32:29.677869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b660 (9): Bad file descriptor
00:24:43.271 [2024-11-20 15:32:29.678792] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:24:43.271 [2024-11-20 15:32:29.678862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:24:43.271 [2024-11-20 15:32:29.678885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.271 [2024-11-20 15:32:29.678902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:24:43.271 [2024-11-20 15:32:29.678912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:24:43.271 [2024-11-20 15:32:29.678921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.271 [2024-11-20 15:32:29.678930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d2b660
00:24:43.271 [2024-11-20 15:32:29.678957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b660 (9): Bad file descriptor
00:24:43.271 [2024-11-20 15:32:29.678972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:43.271 [2024-11-20 15:32:29.678982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:43.271 [2024-11-20 15:32:29.678994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:43.271 [2024-11-20 15:32:29.679005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
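That block is the heart of the negative test: waitforio (host_management.sh@55 above) had confirmed bdevperf completed at least 100 reads, host_management.sh@84 then pulled host0's access to cnode0 mid-run, the target aborted the outstanding submission queue (the WRITE/READ "ABORTED - SQ DELETION" storm above), and the initiator's automatic reset died at FABRIC CONNECT with "does not allow host" (sct 1, sc 132 is that rejection), leaving the controller in failed state -- the outcome this test wants before it restores access below. Reduced to a sketch (rpc_cmd wraps scripts/rpc.py; the polling shape and sleep interval are illustrative, while the RPC names, socket path and NQNs are the ones in the log):

# Gate: wait until the run has done real I/O before injecting the fault.
while (( $(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
           | jq -r '.bdevs[0].num_read_ops') < 100 )); do
    sleep 0.25
done
# Fault: revoke the host's access while I/O is in flight.
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# ...the abort storm and failed reconnect above happen here...
# Recovery: restore access so the clean run that follows can connect.
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0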
00:24:43.271 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.271 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:24:43.271 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.271 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:43.271 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.271 15:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:24:44.651 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 102476 00:24:44.651 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (102476) - No such process 00:24:44.651 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:24:44.651 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:24:44.651 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:44.651 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:24:44.651 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:24:44.652 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:24:44.652 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:44.652 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:44.652 { 00:24:44.652 "params": { 00:24:44.652 "name": "Nvme$subsystem", 00:24:44.652 "trtype": "$TEST_TRANSPORT", 00:24:44.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.652 "adrfam": "ipv4", 00:24:44.652 "trsvcid": "$NVMF_PORT", 00:24:44.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.652 "hdgst": ${hdgst:-false}, 00:24:44.652 "ddgst": ${ddgst:-false} 00:24:44.652 }, 00:24:44.652 "method": "bdev_nvme_attach_controller" 00:24:44.652 } 00:24:44.652 EOF 00:24:44.652 )") 00:24:44.652 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:24:44.652 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
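The second bdevperf invocation below reuses the same config generator, this time reading it from /dev/fd/62. Both fd paths carry the signature of bash process substitution, so the JSON never touches a temp file; the pattern, sketched with the binary path and options from this run (the fd number is whatever bash assigns):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 1
# <(...) becomes /dev/fd/NN in the argument list -- 63 on the first run, 62 here.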
00:24:44.652 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:24:44.652 15:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:44.652 "params": { 00:24:44.652 "name": "Nvme0", 00:24:44.652 "trtype": "tcp", 00:24:44.652 "traddr": "10.0.0.3", 00:24:44.652 "adrfam": "ipv4", 00:24:44.652 "trsvcid": "4420", 00:24:44.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:44.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:44.652 "hdgst": false, 00:24:44.652 "ddgst": false 00:24:44.652 }, 00:24:44.652 "method": "bdev_nvme_attach_controller" 00:24:44.652 }' 00:24:44.652 [2024-11-20 15:32:30.761630] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:24:44.652 [2024-11-20 15:32:30.761715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102526 ] 00:24:44.652 [2024-11-20 15:32:30.914252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.652 [2024-11-20 15:32:30.974080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.911 Running I/O for 1 seconds... 00:24:45.852 2083.00 IOPS, 130.19 MiB/s 00:24:45.852 Latency(us) 00:24:45.852 [2024-11-20T15:32:32.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.852 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:45.852 Verification LBA range: start 0x0 length 0x400 00:24:45.852 Nvme0n1 : 1.03 2115.06 132.19 0.00 0.00 29786.38 4684.90 27161.91 00:24:45.852 [2024-11-20T15:32:32.334Z] =================================================================================================================== 00:24:45.852 [2024-11-20T15:32:32.334Z] Total : 2115.06 132.19 0.00 0.00 29786.38 4684.90 27161.91 00:24:46.111 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:24:46.111 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:24:46.111 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:24:46.111 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:24:46.111 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:24:46.111 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.111 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:24:46.111 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.111 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:24:46.111 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.111 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:24:46.111 rmmod nvme_tcp 00:24:46.111 rmmod nvme_fabrics 00:24:46.370 rmmod nvme_keyring 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 102398 ']' 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 102398 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 102398 ']' 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 102398 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102398 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102398' 00:24:46.370 killing process with pid 102398 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 102398 00:24:46.370 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 102398 00:24:46.630 [2024-11-20 15:32:32.928043] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:24:46.630 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.630 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.630 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.630 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:24:46.630 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:24:46.630 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.630 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.630 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.630 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:46.631 15:32:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:46.631 15:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:46.631 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:46.631 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:46.631 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:46.631 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:46.631 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:46.631 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:46.631 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:24:46.890 00:24:46.890 real 0m6.622s 00:24:46.890 user 0m18.145s 00:24:46.890 sys 0m3.708s 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:46.890 ************************************ 00:24:46.890 END TEST nvmf_host_management 00:24:46.890 ************************************ 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:46.890 ************************************ 00:24:46.890 START TEST nvmf_lvol 00:24:46.890 ************************************ 00:24:46.890 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:24:47.150 * Looking for test storage... 00:24:47.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:24:47.150 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:47.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.151 --rc genhtml_branch_coverage=1 00:24:47.151 --rc genhtml_function_coverage=1 00:24:47.151 --rc genhtml_legend=1 00:24:47.151 --rc geninfo_all_blocks=1 00:24:47.151 --rc geninfo_unexecuted_blocks=1 00:24:47.151 00:24:47.151 ' 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:47.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.151 --rc genhtml_branch_coverage=1 00:24:47.151 --rc genhtml_function_coverage=1 00:24:47.151 --rc genhtml_legend=1 00:24:47.151 --rc geninfo_all_blocks=1 00:24:47.151 --rc geninfo_unexecuted_blocks=1 00:24:47.151 00:24:47.151 ' 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:47.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.151 --rc genhtml_branch_coverage=1 00:24:47.151 --rc genhtml_function_coverage=1 00:24:47.151 --rc genhtml_legend=1 00:24:47.151 --rc geninfo_all_blocks=1 00:24:47.151 --rc geninfo_unexecuted_blocks=1 00:24:47.151 00:24:47.151 ' 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:47.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.151 --rc genhtml_branch_coverage=1 00:24:47.151 --rc genhtml_function_coverage=1 00:24:47.151 --rc genhtml_legend=1 00:24:47.151 --rc geninfo_all_blocks=1 00:24:47.151 --rc geninfo_unexecuted_blocks=1 00:24:47.151 00:24:47.151 ' 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.151 15:32:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:47.151 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:47.411 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:47.412 Cannot find device "nvmf_init_br" 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:47.412 Cannot find device "nvmf_init_br2" 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:47.412 Cannot find device "nvmf_tgt_br" 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:47.412 Cannot find device "nvmf_tgt_br2" 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:47.412 Cannot find device "nvmf_init_br" 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:47.412 Cannot find device "nvmf_init_br2" 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:47.412 Cannot find 
device "nvmf_tgt_br" 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:47.412 Cannot find device "nvmf_tgt_br2" 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:47.412 Cannot find device "nvmf_br" 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:47.412 Cannot find device "nvmf_init_if" 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:47.412 Cannot find device "nvmf_init_if2" 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:47.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:47.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:47.412 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:47.672 15:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:47.672 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:47.672 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.132 ms 00:24:47.672 00:24:47.672 --- 10.0.0.3 ping statistics --- 00:24:47.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.672 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:47.672 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:47.672 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.113 ms 00:24:47.672 00:24:47.672 --- 10.0.0.4 ping statistics --- 00:24:47.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.672 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:47.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:47.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:24:47.672 00:24:47.672 --- 10.0.0.1 ping statistics --- 00:24:47.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.672 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:24:47.672 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:47.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:24:47.672 00:24:47.672 --- 10.0.0.2 ping statistics --- 00:24:47.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.673 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:24:47.673 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.673 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:24:47.673 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:47.673 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.673 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:47.673 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:47.673 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.673 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:47.673 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=102792 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 102792 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 102792 ']' 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.932 15:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:47.932 [2024-11-20 15:32:34.231075] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:47.932 [2024-11-20 15:32:34.232217] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:24:47.932 [2024-11-20 15:32:34.232292] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.933 [2024-11-20 15:32:34.384320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:48.192 [2024-11-20 15:32:34.444357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.192 [2024-11-20 15:32:34.444421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.192 [2024-11-20 15:32:34.444431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.192 [2024-11-20 15:32:34.444440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.192 [2024-11-20 15:32:34.444447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.192 [2024-11-20 15:32:34.445859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.192 [2024-11-20 15:32:34.445951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.192 [2024-11-20 15:32:34.445952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.192 [2024-11-20 15:32:34.571986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:48.192 [2024-11-20 15:32:34.572867] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:48.193 [2024-11-20 15:32:34.572924] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:48.193 [2024-11-20 15:32:34.573163] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
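A condensed sketch of the nvmfappstart step traced above, assuming the default RPC socket /var/tmp/spdk.sock named in the log; the real helper in nvmf/common.sh adds retries, timeouts and error handling:

    # Launch nvmf_tgt inside the target namespace, in interrupt mode (sketch).
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode)
    ip netns exec nvmf_tgt_ns_spdk "${NVMF_APP[@]}" -m 0x7 &   # core mask 0x7 -> three reactors
    nvmfpid=$!
    # waitforlisten: block until the target accepts RPCs on the UNIX domain socket.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

With -m 0x7 and --interrupt-mode, the three reactors and the app/poll-group threads come up in interrupt rather than poll mode, which is what the thread.c and reactor.c notices above confirm.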
00:24:48.762 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.762 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:24:48.762 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.762 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:48.762 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:48.762 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.762 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:49.021 [2024-11-20 15:32:35.363139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.021 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:49.281 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:24:49.281 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:49.540 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:24:49.540 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:24:49.800 15:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:24:50.060 15:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=34720bd8-c47b-4267-ad13-cfa0857dde8c 00:24:50.060 15:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 34720bd8-c47b-4267-ad13-cfa0857dde8c lvol 20 00:24:50.320 15:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f9f5deb8-08b4-4994-93d6-3d421e463eb7 00:24:50.320 15:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:50.579 15:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9f5deb8-08b4-4994-93d6-3d421e463eb7 00:24:50.579 15:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:50.838 [2024-11-20 15:32:37.243011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:50.838 15:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:51.098 15:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=102930 00:24:51.098 15:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:24:51.098 15:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:24:52.035 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot f9f5deb8-08b4-4994-93d6-3d421e463eb7 MY_SNAPSHOT 00:24:52.603 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ea46cff9-4aba-4b69-bb23-34ae177adced 00:24:52.603 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize f9f5deb8-08b4-4994-93d6-3d421e463eb7 30 00:24:52.862 15:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone ea46cff9-4aba-4b69-bb23-34ae177adced MY_CLONE 00:24:53.156 15:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f8a9f125-d5fb-495c-90b9-b2abbebfcc4c 00:24:53.156 15:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate f8a9f125-d5fb-495c-90b9-b2abbebfcc4c 00:24:53.751 15:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 102930 00:25:01.877 Initializing NVMe Controllers 00:25:01.877 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:25:01.877 Controller IO queue size 128, less than required. 00:25:01.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:01.877 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:25:01.877 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:25:01.877 Initialization complete. Launching workers. 
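Condensed, the provisioning chain and the lvol operations exercised while spdk_nvme_perf (pid 102930) drives random writes amount to the following sketch (not the verbatim script; UUID comments are the values this run produced, and sizes are the MiB values set at the top of nvmf_lvol.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                        # Malloc0
    $rpc bdev_malloc_create 64 512                        # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # 34720bd8-...
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # f9f5deb8-...
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # Drive 10 seconds of random writes against the exported lvol...
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    # ...and mutate the lvol while that I/O is in flight:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # ea46cff9-...
    $rpc bdev_lvol_resize "$lvol" 30                      # grow 20 -> 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # f8a9f125-...
    $rpc bdev_lvol_inflate "$clone"                       # decouple the clone from its snapshot
    wait "$perf_pid"                                      # perf's summary follows below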
00:25:01.877 ======================================================== 00:25:01.877 Latency(us) 00:25:01.877 Device Information : IOPS MiB/s Average min max 00:25:01.877 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 5388.10 21.05 23773.70 5453.39 104540.81 00:25:01.877 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9371.40 36.61 13666.01 4901.16 83763.42 00:25:01.877 ======================================================== 00:25:01.877 Total : 14759.50 57.65 17355.92 4901.16 104540.81 00:25:01.877 00:25:01.877 15:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:01.877 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f9f5deb8-08b4-4994-93d6-3d421e463eb7 00:25:01.877 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 34720bd8-c47b-4267-ad13-cfa0857dde8c 00:25:02.136 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:25:02.136 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:25:02.136 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:25:02.136 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:02.136 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:25:02.136 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:02.136 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:25:02.136 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:02.136 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:02.136 rmmod nvme_tcp 00:25:02.136 rmmod nvme_fabrics 00:25:02.136 rmmod nvme_keyring 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 102792 ']' 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 102792 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 102792 ']' 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 102792 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102792 00:25:02.396 killing 
process with pid 102792 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102792' 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 102792 00:25:02.396 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 102792 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:02.655 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:02.656 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:02.656 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:02.656 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:02.656 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:02.656 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:02.656 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:02.656 
15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:02.656 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.656 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.656 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.923 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:25:02.923 00:25:02.923 real 0m15.813s 00:25:02.923 user 0m52.690s 00:25:02.923 sys 0m8.229s 00:25:02.923 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.923 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:25:02.923 ************************************ 00:25:02.923 END TEST nvmf_lvol 00:25:02.923 ************************************ 00:25:02.923 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:25:02.923 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:02.923 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:02.923 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:02.923 ************************************ 00:25:02.923 START TEST nvmf_lvs_grow 00:25:02.923 ************************************ 00:25:02.923 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:25:02.923 * Looking for test storage... 
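The nvmf_lvol suite above ended with nvmftestfini; condensed, that teardown is the sketch below. The iptr helper is the iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline visible in the trace, which strips only the SPDK-tagged rules; treating _remove_spdk_ns as a plain netns delete is an assumption here.

    sync
    modprobe -r nvme-tcp nvme-fabrics                      # nvmfcleanup (rmmod messages above)
    kill "$nvmfpid"                                        # killprocess 102792
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop SPDK-tagged rules only
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                       # _remove_spdk_ns (assumed)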
00:25:02.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:02.923 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:02.923 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:25:02.923 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.188 --rc genhtml_branch_coverage=1 00:25:03.188 --rc genhtml_function_coverage=1 00:25:03.188 --rc genhtml_legend=1 00:25:03.188 --rc geninfo_all_blocks=1 00:25:03.188 --rc geninfo_unexecuted_blocks=1 00:25:03.188 00:25:03.188 ' 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.188 --rc genhtml_branch_coverage=1 00:25:03.188 --rc genhtml_function_coverage=1 00:25:03.188 --rc genhtml_legend=1 00:25:03.188 --rc geninfo_all_blocks=1 00:25:03.188 --rc geninfo_unexecuted_blocks=1 00:25:03.188 00:25:03.188 ' 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.188 --rc genhtml_branch_coverage=1 00:25:03.188 --rc genhtml_function_coverage=1 00:25:03.188 --rc genhtml_legend=1 00:25:03.188 --rc geninfo_all_blocks=1 00:25:03.188 --rc geninfo_unexecuted_blocks=1 00:25:03.188 00:25:03.188 ' 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.188 --rc genhtml_branch_coverage=1 00:25:03.188 --rc genhtml_function_coverage=1 00:25:03.188 --rc genhtml_legend=1 00:25:03.188 --rc geninfo_all_blocks=1 00:25:03.188 --rc geninfo_unexecuted_blocks=1 00:25:03.188 00:25:03.188 ' 00:25:03.188 15:32:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three /opt tool directories repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same PATH, re-prepended] 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same PATH, re-prepended] 00:25:03.188 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo [the exported PATH] 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.189 15:32:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:03.189 Cannot find device "nvmf_init_br" 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:03.189 Cannot find device "nvmf_init_br2" 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:03.189 Cannot find device "nvmf_tgt_br" 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:03.189 Cannot find device "nvmf_tgt_br2" 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:03.189 Cannot find device "nvmf_init_br" 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:03.189 Cannot find device "nvmf_init_br2" 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:03.189 Cannot find device "nvmf_tgt_br" 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:25:03.189 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:03.448 Cannot find device "nvmf_tgt_br2" 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:03.448 Cannot find device "nvmf_br" 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:03.448 Cannot find device "nvmf_init_if" 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:03.448 Cannot find device "nvmf_init_if2" 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:03.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:03.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:03.448 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:03.449 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:03.449 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:03.449 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:03.449 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:03.449 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:03.449 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:03.449 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:03.449 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:03.449 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:03.449 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:03.449 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:03.709 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:03.709 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:03.709 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:03.709 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:03.709 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:03.709 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:03.709 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:25:03.709 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:03.709 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:25:03.709 00:25:03.709 --- 10.0.0.3 ping statistics --- 00:25:03.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.709 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:03.709 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:03.709 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:25:03.709 00:25:03.709 --- 10.0.0.4 ping statistics --- 00:25:03.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.709 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:03.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:03.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:25:03.709 00:25:03.709 --- 10.0.0.1 ping statistics --- 00:25:03.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.709 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:03.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:25:03.709 00:25:03.709 --- 10.0.0.2 ping statistics --- 00:25:03.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.709 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:03.709 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=103346 00:25:03.710 15:32:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 103346 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 103346 ']' 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:03.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:03.710 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:03.970 [2024-11-20 15:32:50.191372] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:03.970 [2024-11-20 15:32:50.192363] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:25:03.970 [2024-11-20 15:32:50.192409] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.970 [2024-11-20 15:32:50.332469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.970 [2024-11-20 15:32:50.372775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.970 [2024-11-20 15:32:50.372820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.970 [2024-11-20 15:32:50.372831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.970 [2024-11-20 15:32:50.372839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.970 [2024-11-20 15:32:50.372846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.970 [2024-11-20 15:32:50.373129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.970 [2024-11-20 15:32:50.444350] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:03.970 [2024-11-20 15:32:50.444613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
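The trace above is nvmf/common.sh finishing its virtual test network and starting the target: the initiator-side links (nvmf_init_if*, nvmf_init_br*) and the target-side links inside the nvmf_tgt_ns_spdk namespace are brought up, the host-side peers are enslaved to an nvmf_br bridge, iptables ACCEPT rules are inserted for the NVMe/TCP port 4420, connectivity across 10.0.0.1-10.0.0.4 is verified with pings, and nvmf_tgt is launched inside the namespace on a single core in interrupt mode. A condensed sketch of the same sequence, assuming the veth pairs and the 10.0.0.1/10.0.0.2/10.0.0.3 addresses were already configured earlier in the script (they precede this excerpt), and dropping the SPDK_NVMF comment tags the ipts wrapper adds:

  # Bring up both ends of the test network (veth creation happens earlier in nvmf/common.sh).
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers together and open TCP/4420 to the target.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Launch the target inside the namespace: core 0 only, all trace groups, interrupt mode.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &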
00:25:04.909 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:04.909 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:25:04.909 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:04.909 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:04.909 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:04.909 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.909 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:04.909 [2024-11-20 15:32:51.357962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:05.168 ************************************ 00:25:05.168 START TEST lvs_grow_clean 00:25:05.168 ************************************ 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:05.168 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:05.427 15:32:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:25:05.427 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:25:05.427 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0dcb1c41-4774-4ca5-9877-2b75b4ba8618 00:25:05.427 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618 00:25:05.427 15:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:25:05.686 15:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:25:05.686 15:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:25:05.686 15:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618 lvol 150 00:25:05.945 15:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=67b56758-42ed-43bf-9dc0-5a7609a11026 00:25:05.945 15:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:05.946 15:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:25:06.205 [2024-11-20 15:32:52.501708] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:25:06.205 [2024-11-20 15:32:52.501880] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:25:06.205 true 00:25:06.205 15:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618 00:25:06.205 15:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:25:06.464 15:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:25:06.464 15:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:06.723 15:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 67b56758-42ed-43bf-9dc0-5a7609a11026 00:25:06.723 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:06.982 [2024-11-20 15:32:53.374281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:06.982 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:07.241 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:25:07.241 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=103507 00:25:07.241 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:07.241 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 103507 /var/tmp/bdevperf.sock 00:25:07.241 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 103507 ']' 00:25:07.241 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.241 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:07.241 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.241 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.241 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:25:07.241 [2024-11-20 15:32:53.647855] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
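At this point the clean-run data path is wired up: the subsystem listens on 10.0.0.3:4420 (plus a discovery listener on the same address), and bdevperf is started with -z, which makes it idle until it is driven over its own RPC socket (/var/tmp/bdevperf.sock). The sketch below condenses those steps together with the bdev_nvme_attach_controller call that follows in the trace once waitforlisten succeeds; paths are shortened relative to the spdk repo root, and the reading of -S 1 as the per-second reporting interval is inferred from the per-second Job lines later in the log:

  # Expose the lvol-backed namespace over NVMe/TCP, then point bdevperf at it.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

  # -z: start idle and wait for RPC; 4 KiB random writes, QD 128, 10 seconds.
  build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

  # Create bdev Nvme0n1 inside the bdevperf process by connecting to the target.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0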
00:25:07.241 [2024-11-20 15:32:53.647957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103507 ] 00:25:07.501 [2024-11-20 15:32:53.799311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.501 [2024-11-20 15:32:53.861877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.068 15:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.068 15:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:25:08.068 15:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:25:08.327 Nvme0n1 00:25:08.587 15:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:25:08.587 [ 00:25:08.587 { 00:25:08.587 "aliases": [ 00:25:08.587 "67b56758-42ed-43bf-9dc0-5a7609a11026" 00:25:08.587 ], 00:25:08.587 "assigned_rate_limits": { 00:25:08.587 "r_mbytes_per_sec": 0, 00:25:08.587 "rw_ios_per_sec": 0, 00:25:08.587 "rw_mbytes_per_sec": 0, 00:25:08.587 "w_mbytes_per_sec": 0 00:25:08.587 }, 00:25:08.587 "block_size": 4096, 00:25:08.587 "claimed": false, 00:25:08.587 "driver_specific": { 00:25:08.587 "mp_policy": "active_passive", 00:25:08.587 "nvme": [ 00:25:08.587 { 00:25:08.587 "ctrlr_data": { 00:25:08.587 "ana_reporting": false, 00:25:08.587 "cntlid": 1, 00:25:08.587 "firmware_revision": "25.01", 00:25:08.587 "model_number": "SPDK bdev Controller", 00:25:08.587 "multi_ctrlr": true, 00:25:08.587 "oacs": { 00:25:08.587 "firmware": 0, 00:25:08.587 "format": 0, 00:25:08.587 "ns_manage": 0, 00:25:08.587 "security": 0 00:25:08.587 }, 00:25:08.587 "serial_number": "SPDK0", 00:25:08.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:08.587 "vendor_id": "0x8086" 00:25:08.587 }, 00:25:08.587 "ns_data": { 00:25:08.587 "can_share": true, 00:25:08.587 "id": 1 00:25:08.587 }, 00:25:08.587 "trid": { 00:25:08.587 "adrfam": "IPv4", 00:25:08.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:08.587 "traddr": "10.0.0.3", 00:25:08.587 "trsvcid": "4420", 00:25:08.587 "trtype": "TCP" 00:25:08.587 }, 00:25:08.587 "vs": { 00:25:08.587 "nvme_version": "1.3" 00:25:08.587 } 00:25:08.587 } 00:25:08.587 ] 00:25:08.587 }, 00:25:08.587 "memory_domains": [ 00:25:08.587 { 00:25:08.587 "dma_device_id": "system", 00:25:08.587 "dma_device_type": 1 00:25:08.587 } 00:25:08.587 ], 00:25:08.587 "name": "Nvme0n1", 00:25:08.587 "num_blocks": 38912, 00:25:08.587 "numa_id": -1, 00:25:08.587 "product_name": "NVMe disk", 00:25:08.587 "supported_io_types": { 00:25:08.587 "abort": true, 00:25:08.587 "compare": true, 00:25:08.587 "compare_and_write": true, 00:25:08.587 "copy": true, 00:25:08.587 "flush": true, 00:25:08.587 "get_zone_info": false, 00:25:08.587 "nvme_admin": true, 00:25:08.587 "nvme_io": true, 00:25:08.587 "nvme_io_md": false, 00:25:08.587 "nvme_iov_md": false, 00:25:08.587 "read": true, 00:25:08.587 "reset": true, 00:25:08.587 "seek_data": false, 00:25:08.587 
"seek_hole": false, 00:25:08.587 "unmap": true, 00:25:08.587 "write": true, 00:25:08.587 "write_zeroes": true, 00:25:08.587 "zcopy": false, 00:25:08.587 "zone_append": false, 00:25:08.587 "zone_management": false 00:25:08.587 }, 00:25:08.587 "uuid": "67b56758-42ed-43bf-9dc0-5a7609a11026", 00:25:08.587 "zoned": false 00:25:08.587 } 00:25:08.587 ] 00:25:08.587 15:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:08.587 15:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=103549 00:25:08.587 15:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:25:08.846 Running I/O for 10 seconds... 00:25:09.784 Latency(us) 00:25:09.784 [2024-11-20T15:32:56.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:09.784 Nvme0n1 : 1.00 9588.00 37.45 0.00 0.00 0.00 0.00 0.00 00:25:09.784 [2024-11-20T15:32:56.266Z] =================================================================================================================== 00:25:09.784 [2024-11-20T15:32:56.266Z] Total : 9588.00 37.45 0.00 0.00 0.00 0.00 0.00 00:25:09.784 00:25:10.719 15:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618 00:25:10.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:10.719 Nvme0n1 : 2.00 10078.00 39.37 0.00 0.00 0.00 0.00 0.00 00:25:10.719 [2024-11-20T15:32:57.201Z] =================================================================================================================== 00:25:10.719 [2024-11-20T15:32:57.201Z] Total : 10078.00 39.37 0.00 0.00 0.00 0.00 0.00 00:25:10.719 00:25:10.979 true 00:25:10.979 15:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618 00:25:10.979 15:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:25:11.238 15:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:25:11.238 15:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:25:11.238 15:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 103549 00:25:11.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:11.806 Nvme0n1 : 3.00 10229.67 39.96 0.00 0.00 0.00 0.00 0.00 00:25:11.806 [2024-11-20T15:32:58.288Z] =================================================================================================================== 00:25:11.806 [2024-11-20T15:32:58.288Z] Total : 10229.67 39.96 0.00 0.00 0.00 0.00 0.00 00:25:11.806 00:25:12.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:12.743 Nvme0n1 : 4.00 10272.50 40.13 0.00 0.00 0.00 0.00 0.00 00:25:12.743 
[2024-11-20T15:32:59.225Z] =================================================================================================================== 00:25:12.743 [2024-11-20T15:32:59.225Z] Total : 10272.50 40.13 0.00 0.00 0.00 0.00 0.00 00:25:12.743 00:25:13.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:13.743 Nvme0n1 : 5.00 10267.80 40.11 0.00 0.00 0.00 0.00 0.00 00:25:13.743 [2024-11-20T15:33:00.225Z] =================================================================================================================== 00:25:13.743 [2024-11-20T15:33:00.225Z] Total : 10267.80 40.11 0.00 0.00 0.00 0.00 0.00 00:25:13.743 00:25:14.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:14.681 Nvme0n1 : 6.00 10240.50 40.00 0.00 0.00 0.00 0.00 0.00 00:25:14.681 [2024-11-20T15:33:01.163Z] =================================================================================================================== 00:25:14.681 [2024-11-20T15:33:01.163Z] Total : 10240.50 40.00 0.00 0.00 0.00 0.00 0.00 00:25:14.681 00:25:16.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:16.061 Nvme0n1 : 7.00 10196.00 39.83 0.00 0.00 0.00 0.00 0.00 00:25:16.061 [2024-11-20T15:33:02.543Z] =================================================================================================================== 00:25:16.061 [2024-11-20T15:33:02.543Z] Total : 10196.00 39.83 0.00 0.00 0.00 0.00 0.00 00:25:16.061 00:25:16.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:16.999 Nvme0n1 : 8.00 10165.62 39.71 0.00 0.00 0.00 0.00 0.00 00:25:16.999 [2024-11-20T15:33:03.481Z] =================================================================================================================== 00:25:16.999 [2024-11-20T15:33:03.481Z] Total : 10165.62 39.71 0.00 0.00 0.00 0.00 0.00 00:25:16.999 00:25:17.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:17.935 Nvme0n1 : 9.00 10114.78 39.51 0.00 0.00 0.00 0.00 0.00 00:25:17.935 [2024-11-20T15:33:04.417Z] =================================================================================================================== 00:25:17.935 [2024-11-20T15:33:04.417Z] Total : 10114.78 39.51 0.00 0.00 0.00 0.00 0.00 00:25:17.935 00:25:18.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:18.871 Nvme0n1 : 10.00 10074.70 39.35 0.00 0.00 0.00 0.00 0.00 00:25:18.871 [2024-11-20T15:33:05.353Z] =================================================================================================================== 00:25:18.871 [2024-11-20T15:33:05.353Z] Total : 10074.70 39.35 0.00 0.00 0.00 0.00 0.00 00:25:18.871 00:25:18.871 00:25:18.871 Latency(us) 00:25:18.871 [2024-11-20T15:33:05.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:18.871 Nvme0n1 : 10.01 10075.32 39.36 0.00 0.00 12694.55 3368.92 30320.27 00:25:18.871 [2024-11-20T15:33:05.353Z] =================================================================================================================== 00:25:18.871 [2024-11-20T15:33:05.353Z] Total : 10075.32 39.36 0.00 0.00 12694.55 3368.92 30320.27 00:25:18.871 { 00:25:18.871 "results": [ 00:25:18.871 { 00:25:18.871 "job": "Nvme0n1", 00:25:18.871 "core_mask": "0x2", 00:25:18.871 "workload": "randwrite", 00:25:18.871 "status": "finished", 00:25:18.871 "queue_depth": 128, 00:25:18.871 
"io_size": 4096, 00:25:18.871 "runtime": 10.012087, 00:25:18.871 "iops": 10075.321958348944, 00:25:18.871 "mibps": 39.356726399800564, 00:25:18.871 "io_failed": 0, 00:25:18.871 "io_timeout": 0, 00:25:18.871 "avg_latency_us": 12694.549575328327, 00:25:18.871 "min_latency_us": 3368.918875502008, 00:25:18.871 "max_latency_us": 30320.269879518073 00:25:18.871 } 00:25:18.871 ], 00:25:18.871 "core_count": 1 00:25:18.871 } 00:25:18.871 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 103507 00:25:18.871 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 103507 ']' 00:25:18.871 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 103507 00:25:18.871 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:25:18.871 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.871 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103507 00:25:18.871 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:18.871 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:18.871 killing process with pid 103507 00:25:18.871 Received shutdown signal, test time was about 10.000000 seconds 00:25:18.871 00:25:18.871 Latency(us) 00:25:18.871 [2024-11-20T15:33:05.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.871 [2024-11-20T15:33:05.353Z] =================================================================================================================== 00:25:18.871 [2024-11-20T15:33:05.353Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.871 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103507' 00:25:18.871 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 103507 00:25:18.871 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 103507 00:25:19.130 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:19.390 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:19.650 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618 00:25:19.650 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:25:19.908 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:19.909 [2024-11-20 15:33:06.321801] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:19.909 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618 00:25:20.168 2024/11/20 15:33:06 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:0dcb1c41-4774-4ca5-9877-2b75b4ba8618], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:25:20.168 request: 00:25:20.168 { 00:25:20.168 "method": "bdev_lvol_get_lvstores", 00:25:20.168 "params": { 00:25:20.168 "uuid": "0dcb1c41-4774-4ca5-9877-2b75b4ba8618" 00:25:20.168 } 00:25:20.168 } 00:25:20.168 Got JSON-RPC error response 00:25:20.168 GoRPCClient: error on JSON-RPC call 00:25:20.168 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:25:20.168 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 
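Here the test exercises the error path: bdev_aio_delete removed the lvstore's backing device, so the bdev_lvol_get_lvstores call traced above must now fail (its -19 / "No such device" response follows below). The NOT helper from autotest_common.sh inverts the wrapped command's exit status; the sketch below is a simplified stand-in, not the real helper body, which (as the es > 128 check in the trace shows) also treats signal deaths specially:

  # Simplified stand-in for autotest_common.sh's NOT(): succeed only if the
  # wrapped command fails with an ordinary (non-signal) error.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"   # killed by a signal: propagate the failure
      (( es != 0 ))                    # expected failure -> arithmetic true -> return 0
  }
  NOT scripts/rpc.py bdev_lvol_get_lvstores -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618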
128 )) 00:25:20.168 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:20.168 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:20.168 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:20.427 aio_bdev 00:25:20.427 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 67b56758-42ed-43bf-9dc0-5a7609a11026 00:25:20.427 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=67b56758-42ed-43bf-9dc0-5a7609a11026 00:25:20.427 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:20.427 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:25:20.427 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:20.427 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:20.427 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:20.685 15:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 67b56758-42ed-43bf-9dc0-5a7609a11026 -t 2000 00:25:20.944 [ 00:25:20.944 { 00:25:20.944 "aliases": [ 00:25:20.944 "lvs/lvol" 00:25:20.944 ], 00:25:20.944 "assigned_rate_limits": { 00:25:20.944 "r_mbytes_per_sec": 0, 00:25:20.944 "rw_ios_per_sec": 0, 00:25:20.944 "rw_mbytes_per_sec": 0, 00:25:20.944 "w_mbytes_per_sec": 0 00:25:20.944 }, 00:25:20.944 "block_size": 4096, 00:25:20.944 "claimed": false, 00:25:20.944 "driver_specific": { 00:25:20.944 "lvol": { 00:25:20.944 "base_bdev": "aio_bdev", 00:25:20.944 "clone": false, 00:25:20.944 "esnap_clone": false, 00:25:20.944 "lvol_store_uuid": "0dcb1c41-4774-4ca5-9877-2b75b4ba8618", 00:25:20.944 "num_allocated_clusters": 38, 00:25:20.944 "snapshot": false, 00:25:20.944 "thin_provision": false 00:25:20.944 } 00:25:20.944 }, 00:25:20.944 "name": "67b56758-42ed-43bf-9dc0-5a7609a11026", 00:25:20.944 "num_blocks": 38912, 00:25:20.944 "product_name": "Logical Volume", 00:25:20.944 "supported_io_types": { 00:25:20.944 "abort": false, 00:25:20.944 "compare": false, 00:25:20.944 "compare_and_write": false, 00:25:20.944 "copy": false, 00:25:20.944 "flush": false, 00:25:20.944 "get_zone_info": false, 00:25:20.944 "nvme_admin": false, 00:25:20.944 "nvme_io": false, 00:25:20.944 "nvme_io_md": false, 00:25:20.944 "nvme_iov_md": false, 00:25:20.944 "read": true, 00:25:20.944 "reset": true, 00:25:20.944 "seek_data": true, 00:25:20.944 "seek_hole": true, 00:25:20.944 "unmap": true, 00:25:20.944 "write": true, 00:25:20.944 "write_zeroes": true, 00:25:20.944 "zcopy": false, 00:25:20.944 "zone_append": false, 00:25:20.944 "zone_management": false 00:25:20.944 }, 00:25:20.944 "uuid": 
"67b56758-42ed-43bf-9dc0-5a7609a11026", 00:25:20.944 "zoned": false 00:25:20.944 } 00:25:20.944 ] 00:25:20.944 15:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:25:20.944 15:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618 00:25:20.944 15:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:25:21.203 15:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:25:21.203 15:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:25:21.203 15:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618 00:25:21.462 15:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:25:21.462 15:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 67b56758-42ed-43bf-9dc0-5a7609a11026 00:25:21.462 15:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0dcb1c41-4774-4ca5-9877-2b75b4ba8618 00:25:21.720 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:21.979 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:22.547 ************************************ 00:25:22.547 END TEST lvs_grow_clean 00:25:22.547 ************************************ 00:25:22.547 00:25:22.547 real 0m17.479s 00:25:22.547 user 0m15.579s 00:25:22.547 sys 0m3.171s 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:22.547 ************************************ 00:25:22.547 START TEST lvs_grow_dirty 00:25:22.547 ************************************ 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:25:22.547 15:33:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:22.547 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:22.814 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:25:22.814 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:25:23.076 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:23.076 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:23.076 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:25:23.335 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:25:23.335 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:25:23.335 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 lvol 150 00:25:23.594 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a6424ac1-72e3-4653-8864-51fb8333d229 00:25:23.594 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:23.594 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:25:23.853 [2024-11-20 15:33:10.085713] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:25:23.853 [2024-11-20 15:33:10.087321] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:25:23.853 true 00:25:23.853 15:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:23.853 15:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:25:23.853 15:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:25:23.853 15:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:24.111 15:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a6424ac1-72e3-4653-8864-51fb8333d229 00:25:24.371 15:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:24.630 [2024-11-20 15:33:10.938216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:24.630 15:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:24.889 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:25:24.889 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=103930 00:25:24.890 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:24.890 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 103930 /var/tmp/bdevperf.sock 00:25:24.890 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 103930 ']' 00:25:24.890 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:24.890 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.890 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:25:24.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:24.890 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.890 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:24.890 [2024-11-20 15:33:11.227536] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:25:24.890 [2024-11-20 15:33:11.227633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103930 ] 00:25:24.890 [2024-11-20 15:33:11.363870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.149 [2024-11-20 15:33:11.427358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.718 15:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.718 15:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:25:25.718 15:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:25:25.977 Nvme0n1 00:25:25.977 15:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:25:26.237 [ 00:25:26.237 { 00:25:26.237 "aliases": [ 00:25:26.237 "a6424ac1-72e3-4653-8864-51fb8333d229" 00:25:26.237 ], 00:25:26.237 "assigned_rate_limits": { 00:25:26.237 "r_mbytes_per_sec": 0, 00:25:26.237 "rw_ios_per_sec": 0, 00:25:26.237 "rw_mbytes_per_sec": 0, 00:25:26.237 "w_mbytes_per_sec": 0 00:25:26.237 }, 00:25:26.237 "block_size": 4096, 00:25:26.237 "claimed": false, 00:25:26.237 "driver_specific": { 00:25:26.237 "mp_policy": "active_passive", 00:25:26.237 "nvme": [ 00:25:26.237 { 00:25:26.237 "ctrlr_data": { 00:25:26.237 "ana_reporting": false, 00:25:26.237 "cntlid": 1, 00:25:26.237 "firmware_revision": "25.01", 00:25:26.237 "model_number": "SPDK bdev Controller", 00:25:26.237 "multi_ctrlr": true, 00:25:26.237 "oacs": { 00:25:26.237 "firmware": 0, 00:25:26.237 "format": 0, 00:25:26.237 "ns_manage": 0, 00:25:26.237 "security": 0 00:25:26.237 }, 00:25:26.237 "serial_number": "SPDK0", 00:25:26.237 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:26.237 "vendor_id": "0x8086" 00:25:26.237 }, 00:25:26.237 "ns_data": { 00:25:26.237 "can_share": true, 00:25:26.237 "id": 1 00:25:26.237 }, 00:25:26.237 "trid": { 00:25:26.237 "adrfam": "IPv4", 00:25:26.237 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:26.237 "traddr": "10.0.0.3", 00:25:26.237 "trsvcid": "4420", 00:25:26.237 "trtype": "TCP" 00:25:26.237 }, 00:25:26.237 "vs": { 00:25:26.237 "nvme_version": "1.3" 00:25:26.237 } 00:25:26.237 } 00:25:26.237 ] 00:25:26.237 }, 00:25:26.237 "memory_domains": [ 00:25:26.237 { 00:25:26.237 "dma_device_id": "system", 00:25:26.237 "dma_device_type": 1 00:25:26.237 } 00:25:26.237 ], 00:25:26.237 "name": "Nvme0n1", 00:25:26.237 "num_blocks": 38912, 
00:25:26.237 "numa_id": -1, 00:25:26.237 "product_name": "NVMe disk", 00:25:26.237 "supported_io_types": { 00:25:26.237 "abort": true, 00:25:26.237 "compare": true, 00:25:26.237 "compare_and_write": true, 00:25:26.237 "copy": true, 00:25:26.237 "flush": true, 00:25:26.237 "get_zone_info": false, 00:25:26.237 "nvme_admin": true, 00:25:26.237 "nvme_io": true, 00:25:26.237 "nvme_io_md": false, 00:25:26.237 "nvme_iov_md": false, 00:25:26.237 "read": true, 00:25:26.237 "reset": true, 00:25:26.237 "seek_data": false, 00:25:26.237 "seek_hole": false, 00:25:26.237 "unmap": true, 00:25:26.237 "write": true, 00:25:26.237 "write_zeroes": true, 00:25:26.237 "zcopy": false, 00:25:26.237 "zone_append": false, 00:25:26.237 "zone_management": false 00:25:26.237 }, 00:25:26.237 "uuid": "a6424ac1-72e3-4653-8864-51fb8333d229", 00:25:26.237 "zoned": false 00:25:26.237 } 00:25:26.237 ] 00:25:26.237 15:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=103972 00:25:26.237 15:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:26.237 15:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:25:26.497 Running I/O for 10 seconds... 00:25:27.435 Latency(us) 00:25:27.435 [2024-11-20T15:33:13.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:27.435 Nvme0n1 : 1.00 11184.00 43.69 0.00 0.00 0.00 0.00 0.00 00:25:27.435 [2024-11-20T15:33:13.917Z] =================================================================================================================== 00:25:27.435 [2024-11-20T15:33:13.917Z] Total : 11184.00 43.69 0.00 0.00 0.00 0.00 0.00 00:25:27.435 00:25:28.377 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:28.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:28.377 Nvme0n1 : 2.00 11070.00 43.24 0.00 0.00 0.00 0.00 0.00 00:25:28.377 [2024-11-20T15:33:14.859Z] =================================================================================================================== 00:25:28.377 [2024-11-20T15:33:14.859Z] Total : 11070.00 43.24 0.00 0.00 0.00 0.00 0.00 00:25:28.377 00:25:28.377 true 00:25:28.636 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:28.636 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:25:28.636 15:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:25:28.636 15:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:25:28.636 15:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 103972 00:25:29.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 
128, IO size: 4096) 00:25:29.575 Nvme0n1 : 3.00 10929.33 42.69 0.00 0.00 0.00 0.00 0.00 00:25:29.575 [2024-11-20T15:33:16.057Z] =================================================================================================================== 00:25:29.575 [2024-11-20T15:33:16.057Z] Total : 10929.33 42.69 0.00 0.00 0.00 0.00 0.00 00:25:29.575 00:25:30.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:30.514 Nvme0n1 : 4.00 10776.75 42.10 0.00 0.00 0.00 0.00 0.00 00:25:30.514 [2024-11-20T15:33:16.996Z] =================================================================================================================== 00:25:30.514 [2024-11-20T15:33:16.996Z] Total : 10776.75 42.10 0.00 0.00 0.00 0.00 0.00 00:25:30.514 00:25:31.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:31.451 Nvme0n1 : 5.00 10701.40 41.80 0.00 0.00 0.00 0.00 0.00 00:25:31.451 [2024-11-20T15:33:17.933Z] =================================================================================================================== 00:25:31.451 [2024-11-20T15:33:17.933Z] Total : 10701.40 41.80 0.00 0.00 0.00 0.00 0.00 00:25:31.451 00:25:32.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:32.392 Nvme0n1 : 6.00 10607.33 41.43 0.00 0.00 0.00 0.00 0.00 00:25:32.392 [2024-11-20T15:33:18.874Z] =================================================================================================================== 00:25:32.392 [2024-11-20T15:33:18.874Z] Total : 10607.33 41.43 0.00 0.00 0.00 0.00 0.00 00:25:32.392 00:25:33.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:33.353 Nvme0n1 : 7.00 10472.29 40.91 0.00 0.00 0.00 0.00 0.00 00:25:33.353 [2024-11-20T15:33:19.835Z] =================================================================================================================== 00:25:33.353 [2024-11-20T15:33:19.835Z] Total : 10472.29 40.91 0.00 0.00 0.00 0.00 0.00 00:25:33.353 00:25:34.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:34.288 Nvme0n1 : 8.00 10100.88 39.46 0.00 0.00 0.00 0.00 0.00 00:25:34.288 [2024-11-20T15:33:20.770Z] =================================================================================================================== 00:25:34.288 [2024-11-20T15:33:20.770Z] Total : 10100.88 39.46 0.00 0.00 0.00 0.00 0.00 00:25:34.288 00:25:35.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:35.668 Nvme0n1 : 9.00 10153.00 39.66 0.00 0.00 0.00 0.00 0.00 00:25:35.668 [2024-11-20T15:33:22.150Z] =================================================================================================================== 00:25:35.668 [2024-11-20T15:33:22.150Z] Total : 10153.00 39.66 0.00 0.00 0.00 0.00 0.00 00:25:35.668 00:25:36.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:36.606 Nvme0n1 : 10.00 10161.70 39.69 0.00 0.00 0.00 0.00 0.00 00:25:36.606 [2024-11-20T15:33:23.088Z] =================================================================================================================== 00:25:36.606 [2024-11-20T15:33:23.088Z] Total : 10161.70 39.69 0.00 0.00 0.00 0.00 0.00 00:25:36.606 00:25:36.606 00:25:36.606 Latency(us) 00:25:36.606 [2024-11-20T15:33:23.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:36.606 Nvme0n1 : 10.01 10170.65 39.73 0.00 0.00 12580.09 
4553.30 237508.78 00:25:36.606 [2024-11-20T15:33:23.088Z] =================================================================================================================== 00:25:36.606 [2024-11-20T15:33:23.088Z] Total : 10170.65 39.73 0.00 0.00 12580.09 4553.30 237508.78 00:25:36.606 { 00:25:36.606 "results": [ 00:25:36.606 { 00:25:36.606 "job": "Nvme0n1", 00:25:36.606 "core_mask": "0x2", 00:25:36.606 "workload": "randwrite", 00:25:36.606 "status": "finished", 00:25:36.606 "queue_depth": 128, 00:25:36.606 "io_size": 4096, 00:25:36.606 "runtime": 10.012831, 00:25:36.606 "iops": 10170.650038935042, 00:25:36.606 "mibps": 39.72910171459001, 00:25:36.606 "io_failed": 0, 00:25:36.606 "io_timeout": 0, 00:25:36.606 "avg_latency_us": 12580.086386572637, 00:25:36.606 "min_latency_us": 4553.304417670683, 00:25:36.606 "max_latency_us": 237508.78072289156 00:25:36.606 } 00:25:36.606 ], 00:25:36.606 "core_count": 1 00:25:36.606 } 00:25:36.606 15:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 103930 00:25:36.606 15:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 103930 ']' 00:25:36.606 15:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 103930 00:25:36.606 15:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:25:36.606 15:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:36.606 15:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103930 00:25:36.606 15:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:36.606 15:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:36.606 killing process with pid 103930 00:25:36.606 15:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103930' 00:25:36.606 Received shutdown signal, test time was about 10.000000 seconds 00:25:36.606 00:25:36.606 Latency(us) 00:25:36.606 [2024-11-20T15:33:23.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.606 [2024-11-20T15:33:23.088Z] =================================================================================================================== 00:25:36.606 [2024-11-20T15:33:23.088Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:36.606 15:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 103930 00:25:36.606 15:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 103930 00:25:36.606 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:36.866 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
00:25:37.125 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:37.125 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 103346 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 103346 00:25:37.384 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 103346 Killed "${NVMF_APP[@]}" "$@" 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=104135 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 104135 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 104135 ']' 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:25:37.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.384 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:37.384 [2024-11-20 15:33:23.793963] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:25:37.384 [2024-11-20 15:33:23.794964] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:25:37.384 [2024-11-20 15:33:23.795018] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.644 [2024-11-20 15:33:23.948850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.644 [2024-11-20 15:33:24.005353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.644 [2024-11-20 15:33:24.005410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.644 [2024-11-20 15:33:24.005419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.644 [2024-11-20 15:33:24.005428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.644 [2024-11-20 15:33:24.005435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:37.644 [2024-11-20 15:33:24.005817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.903 [2024-11-20 15:33:24.130017] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:37.903 [2024-11-20 15:33:24.130369] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:38.471 15:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.471 15:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:25:38.471 15:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:38.471 15:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:38.471 15:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:38.471 15:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.471 15:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:38.471 [2024-11-20 15:33:24.942069] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:38.471 [2024-11-20 15:33:24.943050] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:38.471 [2024-11-20 15:33:24.943358] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:38.731 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:25:38.731 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a6424ac1-72e3-4653-8864-51fb8333d229 00:25:38.731 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@903 -- # local bdev_name=a6424ac1-72e3-4653-8864-51fb8333d229 00:25:38.731 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:38.731 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:25:38.731 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:38.731 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:38.731 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:38.990 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a6424ac1-72e3-4653-8864-51fb8333d229 -t 2000 00:25:38.990 [ 00:25:38.990 { 00:25:38.990 "aliases": [ 00:25:38.990 "lvs/lvol" 00:25:38.990 ], 00:25:38.990 "assigned_rate_limits": { 00:25:38.990 "r_mbytes_per_sec": 0, 00:25:38.990 "rw_ios_per_sec": 0, 00:25:38.990 "rw_mbytes_per_sec": 0, 00:25:38.990 "w_mbytes_per_sec": 0 00:25:38.990 }, 00:25:38.990 "block_size": 4096, 00:25:38.990 "claimed": false, 00:25:38.990 "driver_specific": { 00:25:38.990 "lvol": { 00:25:38.990 "base_bdev": "aio_bdev", 00:25:38.990 "clone": false, 00:25:38.990 "esnap_clone": false, 00:25:38.990 "lvol_store_uuid": "919e2429-ce2e-4ebd-b981-a6590a4cfae0", 00:25:38.990 "num_allocated_clusters": 38, 00:25:38.990 "snapshot": false, 00:25:38.990 "thin_provision": false 00:25:38.990 } 00:25:38.990 }, 00:25:38.990 "name": "a6424ac1-72e3-4653-8864-51fb8333d229", 00:25:38.990 "num_blocks": 38912, 00:25:38.990 "product_name": "Logical Volume", 00:25:38.990 "supported_io_types": { 00:25:38.990 "abort": false, 00:25:38.990 "compare": false, 00:25:38.990 "compare_and_write": false, 00:25:38.990 "copy": false, 00:25:38.990 "flush": false, 00:25:38.990 "get_zone_info": false, 00:25:38.990 "nvme_admin": false, 00:25:38.990 "nvme_io": false, 00:25:38.990 "nvme_io_md": false, 00:25:38.990 "nvme_iov_md": false, 00:25:38.990 "read": true, 00:25:38.990 "reset": true, 00:25:38.990 "seek_data": true, 00:25:38.990 "seek_hole": true, 00:25:38.990 "unmap": true, 00:25:38.990 "write": true, 00:25:38.990 "write_zeroes": true, 00:25:38.990 "zcopy": false, 00:25:38.990 "zone_append": false, 00:25:38.990 "zone_management": false 00:25:38.990 }, 00:25:38.990 "uuid": "a6424ac1-72e3-4653-8864-51fb8333d229", 00:25:38.990 "zoned": false 00:25:38.990 } 00:25:38.990 ] 00:25:38.990 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:25:38.990 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:25:38.990 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:39.249 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:25:39.249 15:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:25:39.249 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:39.508 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:25:39.508 15:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:39.767 [2024-11-20 15:33:26.074784] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:25:39.767 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:39.767 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:25:39.767 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:39.767 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.767 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.767 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.767 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.767 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.767 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.767 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.767 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:39.767 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:40.026 2024/11/20 15:33:26 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:919e2429-ce2e-4ebd-b981-a6590a4cfae0], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:25:40.026 request: 00:25:40.026 { 00:25:40.026 "method": "bdev_lvol_get_lvstores", 00:25:40.026 "params": { 00:25:40.026 "uuid": "919e2429-ce2e-4ebd-b981-a6590a4cfae0" 00:25:40.026 } 00:25:40.026 } 
00:25:40.026 Got JSON-RPC error response 00:25:40.026 GoRPCClient: error on JSON-RPC call 00:25:40.026 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:25:40.026 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:40.026 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:40.026 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:40.026 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:40.285 aio_bdev 00:25:40.285 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a6424ac1-72e3-4653-8864-51fb8333d229 00:25:40.285 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a6424ac1-72e3-4653-8864-51fb8333d229 00:25:40.285 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:40.285 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:25:40.285 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:40.285 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:40.285 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:40.545 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a6424ac1-72e3-4653-8864-51fb8333d229 -t 2000 00:25:40.545 [ 00:25:40.545 { 00:25:40.545 "aliases": [ 00:25:40.545 "lvs/lvol" 00:25:40.545 ], 00:25:40.545 "assigned_rate_limits": { 00:25:40.545 "r_mbytes_per_sec": 0, 00:25:40.545 "rw_ios_per_sec": 0, 00:25:40.545 "rw_mbytes_per_sec": 0, 00:25:40.545 "w_mbytes_per_sec": 0 00:25:40.545 }, 00:25:40.545 "block_size": 4096, 00:25:40.545 "claimed": false, 00:25:40.545 "driver_specific": { 00:25:40.545 "lvol": { 00:25:40.545 "base_bdev": "aio_bdev", 00:25:40.545 "clone": false, 00:25:40.545 "esnap_clone": false, 00:25:40.545 "lvol_store_uuid": "919e2429-ce2e-4ebd-b981-a6590a4cfae0", 00:25:40.545 "num_allocated_clusters": 38, 00:25:40.545 "snapshot": false, 00:25:40.545 "thin_provision": false 00:25:40.545 } 00:25:40.545 }, 00:25:40.545 "name": "a6424ac1-72e3-4653-8864-51fb8333d229", 00:25:40.545 "num_blocks": 38912, 00:25:40.545 "product_name": "Logical Volume", 00:25:40.545 "supported_io_types": { 00:25:40.545 "abort": false, 00:25:40.545 "compare": false, 00:25:40.545 "compare_and_write": false, 00:25:40.545 "copy": false, 00:25:40.545 "flush": false, 00:25:40.545 "get_zone_info": false, 00:25:40.545 "nvme_admin": false, 00:25:40.545 "nvme_io": false, 00:25:40.545 "nvme_io_md": false, 00:25:40.545 "nvme_iov_md": false, 
00:25:40.545 "read": true, 00:25:40.545 "reset": true, 00:25:40.545 "seek_data": true, 00:25:40.545 "seek_hole": true, 00:25:40.545 "unmap": true, 00:25:40.545 "write": true, 00:25:40.545 "write_zeroes": true, 00:25:40.545 "zcopy": false, 00:25:40.545 "zone_append": false, 00:25:40.545 "zone_management": false 00:25:40.545 }, 00:25:40.545 "uuid": "a6424ac1-72e3-4653-8864-51fb8333d229", 00:25:40.545 "zoned": false 00:25:40.545 } 00:25:40.545 ] 00:25:40.545 15:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:25:40.545 15:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:40.545 15:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:25:40.803 15:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:25:40.803 15:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:40.803 15:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:25:41.061 15:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:25:41.061 15:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a6424ac1-72e3-4653-8864-51fb8333d229 00:25:41.319 15:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 919e2429-ce2e-4ebd-b981-a6590a4cfae0 00:25:41.576 15:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:41.834 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:42.092 00:25:42.092 real 0m19.520s 00:25:42.092 user 0m25.384s 00:25:42.092 sys 0m7.930s 00:25:42.092 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:42.092 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:42.092 ************************************ 00:25:42.092 END TEST lvs_grow_dirty 00:25:42.092 ************************************ 00:25:42.092 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:25:42.092 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:25:42.092 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:25:42.092 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 
00:25:42.092 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:42.092 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:42.092 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:42.092 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:42.092 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:42.092 nvmf_trace.0 00:25:42.349 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:25:42.349 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:25:42.349 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:42.349 15:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:25:42.606 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:42.606 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:25:42.606 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:42.606 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:42.606 rmmod nvme_tcp 00:25:42.865 rmmod nvme_fabrics 00:25:42.865 rmmod nvme_keyring 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 104135 ']' 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 104135 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 104135 ']' 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 104135 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104135 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:42.865 killing process with pid 104135 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 104135' 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 104135 00:25:42.865 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 104135 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:43.123 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.382 15:33:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:25:43.382 00:25:43.382 real 0m40.495s 00:25:43.382 user 0m42.402s 00:25:43.382 sys 0m12.528s 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:43.382 ************************************ 00:25:43.382 END TEST nvmf_lvs_grow 00:25:43.382 ************************************ 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:43.382 ************************************ 00:25:43.382 START TEST nvmf_bdev_io_wait 00:25:43.382 ************************************ 00:25:43.382 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:25:43.642 * Looking for test storage... 
00:25:43.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:43.642 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:43.642 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:25:43.642 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:43.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.642 --rc genhtml_branch_coverage=1 00:25:43.642 --rc genhtml_function_coverage=1 00:25:43.642 --rc genhtml_legend=1 00:25:43.642 --rc geninfo_all_blocks=1 00:25:43.642 --rc geninfo_unexecuted_blocks=1 00:25:43.642 00:25:43.642 ' 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:43.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.642 --rc genhtml_branch_coverage=1 00:25:43.642 --rc genhtml_function_coverage=1 00:25:43.642 --rc genhtml_legend=1 00:25:43.642 --rc geninfo_all_blocks=1 00:25:43.642 --rc geninfo_unexecuted_blocks=1 00:25:43.642 00:25:43.642 ' 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:43.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.642 --rc genhtml_branch_coverage=1 00:25:43.642 --rc genhtml_function_coverage=1 00:25:43.642 --rc genhtml_legend=1 00:25:43.642 --rc geninfo_all_blocks=1 00:25:43.642 --rc geninfo_unexecuted_blocks=1 00:25:43.642 00:25:43.642 ' 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:43.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.642 --rc genhtml_branch_coverage=1 00:25:43.642 --rc genhtml_function_coverage=1 00:25:43.642 --rc genhtml_legend=1 00:25:43.642 --rc geninfo_all_blocks=1 00:25:43.642 --rc 
geninfo_unexecuted_blocks=1 00:25:43.642 00:25:43.642 ' 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.642 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:43.643 Cannot find device "nvmf_init_br" 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:25:43.643 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:43.904 Cannot find device "nvmf_init_br2" 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:43.904 Cannot find device "nvmf_tgt_br" 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:43.904 Cannot find device "nvmf_tgt_br2" 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:43.904 Cannot find device "nvmf_init_br" 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:43.904 Cannot find device "nvmf_init_br2" 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:25:43.904 Cannot find device "nvmf_tgt_br" 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:43.904 Cannot find device "nvmf_tgt_br2" 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:43.904 Cannot find device "nvmf_br" 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:43.904 Cannot find device "nvmf_init_if" 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:43.904 Cannot find device "nvmf_init_if2" 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:43.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:43.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:43.904 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:44.203 15:33:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:44.203 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:44.204 
15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:44.204 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:44.204 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:25:44.204 00:25:44.204 --- 10.0.0.3 ping statistics --- 00:25:44.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.204 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:44.204 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:44.204 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:25:44.204 00:25:44.204 --- 10.0.0.4 ping statistics --- 00:25:44.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.204 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:44.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:44.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:25:44.204 00:25:44.204 --- 10.0.0.1 ping statistics --- 00:25:44.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.204 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:44.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:44.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:25:44.204 00:25:44.204 --- 10.0.0.2 ping statistics --- 00:25:44.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.204 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=104608 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 104608 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 104608 ']' 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:44.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
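The ping matrix above closes out the network bring-up: 10.0.0.1/.2 answer from the host side and 10.0.0.3/.4 from inside the namespace, so the veth/bridge topology is ready before the target starts. Condensed into a standalone sketch (interface names and addresses are the ones from the trace; the leftover-probe and teardown steps at the top of nvmf_veth_init, and all error handling, are omitted):

  # Hedged sketch of the topology nvmf_veth_init (nvmf/common.sh) builds.
  ip netns add nvmf_tgt_ns_spdk

  # Two initiator-side and two target-side veth pairs; the *_br ends become
  # bridge ports, the *_if ends carry the addresses.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # The target ends move into the namespace so nvmf_tgt runs on its own stack.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # One bridge stitches the four *_br ends into a single L2 segment.
  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
      ip link set "$dev" up
  done
  ip link set nvmf_init_if up
  ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_br up

  # Port-4420 ACCEPT rules are tagged via an iptables comment (the harness's
  # ipts() wrapper embeds the full rule text), so teardown can strip exactly
  # these rules with: iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'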
00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:25:44.204 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:44.204 [2024-11-20 15:33:30.671111] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:44.204 [2024-11-20 15:33:30.672086] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:25:44.204 [2024-11-20 15:33:30.672140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.478 [2024-11-20 15:33:30.810272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:44.478 [2024-11-20 15:33:30.883231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.478 [2024-11-20 15:33:30.883308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.478 [2024-11-20 15:33:30.883319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:44.478 [2024-11-20 15:33:30.883328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:44.478 [2024-11-20 15:33:30.883335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:44.478 [2024-11-20 15:33:30.884802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.478 [2024-11-20 15:33:30.884987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.478 [2024-11-20 15:33:30.885085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.478 [2024-11-20 15:33:30.885087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:44.478 [2024-11-20 15:33:30.885766] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
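nvmfappstart above reduces to launching nvmf_tgt inside the namespace and polling its RPC socket until the app answers. A minimal sketch of that launch-and-wait pattern, assuming a repo-relative rpc.py path; the real waitforlisten in autotest_common.sh is more careful than this loop:

  NS_CMD='ip netns exec nvmf_tgt_ns_spdk'
  APP=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
  RPC_SOCK=/var/tmp/spdk.sock

  # Same invocation as the trace: interrupt mode, 4-core mask, and
  # --wait-for-rpc to park the app before subsystem initialization.
  $NS_CMD "$APP" -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!

  # Sketch of waitforlisten: poll until the UNIX-domain socket accepts RPCs.
  for ((i = 0; i < 100; i++)); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early'; exit 1; }
      if [[ -S $RPC_SOCK ]] && scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done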
00:25:45.416 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.416 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:25:45.416 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:45.416 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:45.416 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:45.417 [2024-11-20 15:33:31.733902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:45.417 [2024-11-20 15:33:31.734114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:45.417 [2024-11-20 15:33:31.735722] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:25:45.417 [2024-11-20 15:33:31.735889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
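This is where --wait-for-rpc pays off: bdev_set_options is only accepted before subsystem initialization, so the script parks the app, shrinks the bdev_io pool, and only then releases startup with framework_start_init. The same two calls as plain rpc.py invocations (socket path as above):

  RPC='scripts/rpc.py -s /var/tmp/spdk.sock'

  # A deliberately tiny bdev_io pool (5 entries, per-thread cache of 1) makes
  # submissions run out of spdk_bdev_io structures, pushing bdevperf through
  # the spdk_bdev_queue_io_wait() retry path this test is named after.
  $RPC bdev_set_options -p 5 -c 1

  # Resume startup; after this the pool sizing is fixed.
  $RPC framework_start_init

The nvmf_create_transport, bdev_malloc_create and nvmf_create_subsystem calls that follow in the trace then stand up the actual TCP listener on 10.0.0.3:4420.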
00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:45.417 [2024-11-20 15:33:31.746125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:45.417 Malloc0 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:45.417 [2024-11-20 15:33:31.838391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=104661 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=104663 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:45.417 { 00:25:45.417 "params": { 00:25:45.417 "name": "Nvme$subsystem", 00:25:45.417 "trtype": "$TEST_TRANSPORT", 00:25:45.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:45.417 "adrfam": "ipv4", 00:25:45.417 "trsvcid": "$NVMF_PORT", 00:25:45.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:45.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:45.417 "hdgst": ${hdgst:-false}, 00:25:45.417 "ddgst": ${ddgst:-false} 00:25:45.417 }, 00:25:45.417 "method": "bdev_nvme_attach_controller" 00:25:45.417 } 00:25:45.417 EOF 00:25:45.417 )") 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=104665 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:45.417 { 00:25:45.417 "params": { 00:25:45.417 "name": "Nvme$subsystem", 00:25:45.417 "trtype": "$TEST_TRANSPORT", 00:25:45.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:45.417 "adrfam": "ipv4", 00:25:45.417 "trsvcid": "$NVMF_PORT", 00:25:45.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:45.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:45.417 "hdgst": ${hdgst:-false}, 00:25:45.417 "ddgst": ${ddgst:-false} 00:25:45.417 }, 00:25:45.417 "method": "bdev_nvme_attach_controller" 00:25:45.417 } 00:25:45.417 EOF 00:25:45.417 )") 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=104667 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:25:45.417 15:33:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:45.417 { 00:25:45.417 "params": { 00:25:45.417 "name": "Nvme$subsystem", 00:25:45.417 "trtype": "$TEST_TRANSPORT", 00:25:45.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:45.417 "adrfam": "ipv4", 00:25:45.417 "trsvcid": "$NVMF_PORT", 00:25:45.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:45.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:45.417 "hdgst": ${hdgst:-false}, 00:25:45.417 "ddgst": ${ddgst:-false} 00:25:45.417 }, 00:25:45.417 "method": "bdev_nvme_attach_controller" 00:25:45.417 } 00:25:45.417 EOF 00:25:45.417 )") 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:45.417 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:45.417 { 00:25:45.417 "params": { 00:25:45.417 "name": "Nvme$subsystem", 00:25:45.417 "trtype": "$TEST_TRANSPORT", 00:25:45.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:45.417 "adrfam": "ipv4", 00:25:45.417 "trsvcid": "$NVMF_PORT", 00:25:45.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:45.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:45.418 "hdgst": ${hdgst:-false}, 00:25:45.418 "ddgst": ${ddgst:-false} 00:25:45.418 }, 00:25:45.418 "method": "bdev_nvme_attach_controller" 00:25:45.418 } 00:25:45.418 EOF 00:25:45.418 )") 00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:45.418 "params": { 00:25:45.418 "name": "Nvme1", 00:25:45.418 "trtype": "tcp", 00:25:45.418 "traddr": "10.0.0.3", 00:25:45.418 "adrfam": "ipv4", 00:25:45.418 "trsvcid": "4420", 00:25:45.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:45.418 "hdgst": false, 00:25:45.418 "ddgst": false 00:25:45.418 }, 00:25:45.418 "method": "bdev_nvme_attach_controller" 00:25:45.418 }' 00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:45.418 "params": { 00:25:45.418 "name": "Nvme1", 00:25:45.418 "trtype": "tcp", 00:25:45.418 "traddr": "10.0.0.3", 00:25:45.418 "adrfam": "ipv4", 00:25:45.418 "trsvcid": "4420", 00:25:45.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:45.418 "hdgst": false, 00:25:45.418 "ddgst": false 00:25:45.418 }, 00:25:45.418 "method": "bdev_nvme_attach_controller" 00:25:45.418 }' 00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:45.418 "params": { 00:25:45.418 "name": "Nvme1", 00:25:45.418 "trtype": "tcp", 00:25:45.418 "traddr": "10.0.0.3", 00:25:45.418 "adrfam": "ipv4", 00:25:45.418 "trsvcid": "4420", 00:25:45.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:45.418 "hdgst": false, 00:25:45.418 "ddgst": false 00:25:45.418 }, 00:25:45.418 "method": "bdev_nvme_attach_controller" 00:25:45.418 }' 00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:45.418 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:45.418 "params": { 00:25:45.418 "name": "Nvme1", 00:25:45.418 "trtype": "tcp", 00:25:45.418 "traddr": "10.0.0.3", 00:25:45.418 "adrfam": "ipv4", 00:25:45.418 "trsvcid": "4420", 00:25:45.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:45.418 "hdgst": false, 00:25:45.418 "ddgst": false 00:25:45.418 }, 00:25:45.418 "method": "bdev_nvme_attach_controller" 00:25:45.418 }' 00:25:45.678 [2024-11-20 15:33:31.900630] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:25:45.678 [2024-11-20 15:33:31.900632] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:25:45.678 [2024-11-20 15:33:31.900720] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:45.678 [2024-11-20 15:33:31.900721] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:25:45.678 [2024-11-20 15:33:31.903631] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:25:45.678 [2024-11-20 15:33:31.903802] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:25:45.678 [2024-11-20 15:33:31.923480] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:25:45.678 [2024-11-20 15:33:31.923543] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:25:45.678 15:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 104661 00:25:45.678 [2024-11-20 15:33:32.145053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.937 [2024-11-20 15:33:32.209492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:45.937 [2024-11-20 15:33:32.244933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.937 [2024-11-20 15:33:32.309512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:45.937 [2024-11-20 15:33:32.356551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.196 [2024-11-20 15:33:32.420165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:46.196 [2024-11-20 15:33:32.449725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.196 Running I/O for 1 seconds... 00:25:46.196 Running I/O for 1 seconds... 00:25:46.196 [2024-11-20 15:33:32.512889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:46.196 Running I/O for 1 seconds... 00:25:46.196 Running I/O for 1 seconds...
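The --json /dev/fd/63 in the bdevperf command lines is bash process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller config seen above, and each instance reads it from an anonymous fd. Distinct -i shared-memory ids give each instance its own DPDK file prefix (spdk1..spdk4), which is why four separate EAL initializations show up. A hedged sketch of two of the four concurrent jobs:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

  # <(gen_nvmf_target_json) is exactly what expands to /dev/fd/63 in the log.
  "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!

  "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w read -t 1 -s 256 &
  READ_PID=$!

  # The script waits on each PID in turn; any non-zero exit fails the test.
  wait "$WRITE_PID"
  wait "$READ_PID"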
00:25:47.134 12034.00 IOPS, 47.01 MiB/s 00:25:47.134 Latency(us) 00:25:47.134 [2024-11-20T15:33:33.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.134 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:25:47.134 Nvme1n1 : 1.01 12095.69 47.25 0.00 0.00 10546.43 3553.16 12264.97 00:25:47.134 [2024-11-20T15:33:33.616Z] =================================================================================================================== 00:25:47.134 [2024-11-20T15:33:33.616Z] Total : 12095.69 47.25 0.00 0.00 10546.43 3553.16 12264.97 00:25:47.134 9626.00 IOPS, 37.60 MiB/s 00:25:47.134 Latency(us) 00:25:47.134 [2024-11-20T15:33:33.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.134 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:25:47.134 Nvme1n1 : 1.01 9678.84 37.81 0.00 0.00 13170.81 6895.76 21371.58 00:25:47.134 [2024-11-20T15:33:33.616Z] =================================================================================================================== 00:25:47.134 [2024-11-20T15:33:33.616Z] Total : 9678.84 37.81 0.00 0.00 13170.81 6895.76 21371.58 00:25:47.134 244544.00 IOPS, 955.25 MiB/s 00:25:47.134 Latency(us) 00:25:47.134 [2024-11-20T15:33:33.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.134 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:25:47.134 Nvme1n1 : 1.00 244132.82 953.64 0.00 0.00 521.60 233.59 1677.88 00:25:47.134 [2024-11-20T15:33:33.616Z] =================================================================================================================== 00:25:47.134 [2024-11-20T15:33:33.616Z] Total : 244132.82 953.64 0.00 0.00 521.60 233.59 1677.88 00:25:47.394 11079.00 IOPS, 43.28 MiB/s 00:25:47.394 Latency(us) 00:25:47.394 [2024-11-20T15:33:33.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.394 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:25:47.394 Nvme1n1 : 1.00 11171.02 43.64 0.00 0.00 11432.61 2145.05 18107.94 00:25:47.394 [2024-11-20T15:33:33.876Z] =================================================================================================================== 00:25:47.394 [2024-11-20T15:33:33.876Z] Total : 11171.02 43.64 0.00 0.00 11432.61 2145.05 18107.94 00:25:47.394 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 104663 00:25:47.394 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 104665 00:25:47.394 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 104667 00:25:47.394 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.394 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.394 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:47.394 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.394 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:25:47.394 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:25:47.394 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:47.394 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.654 rmmod nvme_tcp 00:25:47.654 rmmod nvme_fabrics 00:25:47.654 rmmod nvme_keyring 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 104608 ']' 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 104608 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 104608 ']' 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 104608 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:47.654 15:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104608 00:25:47.654 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:47.654 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:47.654 killing process with pid 104608 00:25:47.654 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104608' 00:25:47.654 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 104608 00:25:47.654 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 104608 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:25:47.913 
15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:47.913 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:25:48.171 00:25:48.171 real 0m4.729s 00:25:48.171 user 0m13.483s 00:25:48.171 sys 0m3.345s 00:25:48.171 ************************************ 00:25:48.171 END TEST nvmf_bdev_io_wait 00:25:48.171 ************************************ 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 
-- # set +x 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:48.171 ************************************ 00:25:48.171 START TEST nvmf_queue_depth 00:25:48.171 ************************************ 00:25:48.171 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:25:48.431 * Looking for test storage... 00:25:48.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:48.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.431 --rc genhtml_branch_coverage=1 00:25:48.431 --rc genhtml_function_coverage=1 00:25:48.431 --rc genhtml_legend=1 00:25:48.431 --rc geninfo_all_blocks=1 00:25:48.431 --rc geninfo_unexecuted_blocks=1 00:25:48.431 00:25:48.431 ' 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:48.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.431 --rc genhtml_branch_coverage=1 00:25:48.431 --rc genhtml_function_coverage=1 00:25:48.431 --rc genhtml_legend=1 00:25:48.431 --rc geninfo_all_blocks=1 00:25:48.431 --rc geninfo_unexecuted_blocks=1 00:25:48.431 00:25:48.431 ' 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:48.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.431 --rc genhtml_branch_coverage=1 00:25:48.431 --rc genhtml_function_coverage=1 00:25:48.431 --rc genhtml_legend=1 00:25:48.431 --rc geninfo_all_blocks=1 00:25:48.431 --rc geninfo_unexecuted_blocks=1 00:25:48.431 00:25:48.431 ' 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:48.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.431 --rc genhtml_branch_coverage=1 00:25:48.431 --rc genhtml_function_coverage=1 00:25:48.431 --rc genhtml_legend=1 00:25:48.431 --rc geninfo_all_blocks=1 00:25:48.431 --rc 
geninfo_unexecuted_blocks=1 00:25:48.431 00:25:48.431 ' 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:48.431 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:48.432 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:48.692 Cannot find device "nvmf_init_br" 00:25:48.692 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:25:48.692 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:48.692 Cannot find device "nvmf_init_br2" 00:25:48.692 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:25:48.692 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:48.692 Cannot find device "nvmf_tgt_br" 00:25:48.692 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:25:48.692 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:48.692 Cannot find device "nvmf_tgt_br2" 00:25:48.692 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:25:48.692 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:48.692 Cannot find device "nvmf_init_br" 00:25:48.692 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:25:48.692 15:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:48.692 Cannot find device "nvmf_init_br2" 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:25:48.692 
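A note on the probe failures above: nvmf_veth_init first tries to dismantle any topology left over from a previous run, so on a clean host every probe prints "Cannot find device". Each failing command shares its source line with a trailing 'true' (common.sh@162 traces both the ip command and true), which is the classic 'cmd || true' idiom for keeping a missing device from aborting a script running under 'set -e'. A minimal sketch of the pattern, with the device and namespace names taken from this trace:

    # tolerate absent devices during cleanup; '|| true' swallows the
    # non-zero exit so errexit does not abort the test harness
    set -e
    ip link set nvmf_init_br nomaster || true
    ip link set nvmf_tgt_br nomaster || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true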
15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:48.692 Cannot find device "nvmf_tgt_br" 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:48.692 Cannot find device "nvmf_tgt_br2" 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:48.692 Cannot find device "nvmf_br" 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:48.692 Cannot find device "nvmf_init_if" 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:48.692 Cannot find device "nvmf_init_if2" 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:48.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:48.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:48.692 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:48.951 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:48.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:48.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.131 ms 00:25:48.952 00:25:48.952 --- 10.0.0.3 ping statistics --- 00:25:48.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.952 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:25:48.952 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:48.952 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:48.952 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:25:48.952 00:25:48.952 --- 10.0.0.4 ping statistics --- 00:25:48.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.952 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:25:48.952 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:48.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:48.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:48.952 00:25:48.952 --- 10.0.0.1 ping statistics --- 00:25:48.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.952 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:48.952 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:48.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:48.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:25:48.952 00:25:48.952 --- 10.0.0.2 ping statistics --- 00:25:48.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.952 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:25:48.952 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.952 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:25:48.952 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:48.952 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.952 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:48.952 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:48.952 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.952 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:48.952 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=104962 00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 104962 00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 104962 ']' 00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
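The network the target is being launched into was assembled moments earlier by nvmf_veth_init (common.sh@177 through @225 above): four veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, a bridge joining the host-side peers, firewall openings for TCP port 4420, and a ping sweep. Condensed into a standalone sketch, with every name, address, and rule taken from the trace (run as root):

    ip netns add nvmf_tgt_ns_spdk                               # target gets its own net namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br  # initiator-side pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br   # target-side pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk             # move target ends into the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiators: .1 and .2
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # targets: .3 and .4
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                             # bridge ties both sides together
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # open NVMe/TCP port 4420; the comment tag lets teardown find and strip the rule later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The equivalent INPUT rule is added for nvmf_init_if2, and the four single-packet pings whose statistics appear above are the sanity check that both directions of the resulting 10.0.0.0/24 segment actually carry traffic.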
00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.211 15:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:49.211 [2024-11-20 15:33:35.507333] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:49.211 [2024-11-20 15:33:35.508354] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:25:49.211 [2024-11-20 15:33:35.508413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.211 [2024-11-20 15:33:35.663600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.470 [2024-11-20 15:33:35.721941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.470 [2024-11-20 15:33:35.721991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.470 [2024-11-20 15:33:35.722001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.470 [2024-11-20 15:33:35.722010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.470 [2024-11-20 15:33:35.722016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.470 [2024-11-20 15:33:35.722449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.470 [2024-11-20 15:33:35.849851] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:49.470 [2024-11-20 15:33:35.850152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
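nvmfappstart (common.sh@508 through @510) runs the target inside that namespace and blocks until its RPC socket answers. A schematic of the launch-and-wait step: the binary path, flags, and socket are from the trace, while the poll loop is an assumption standing in for the real waitforlisten helper in autotest_common.sh:

    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk \
        "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is ready to serve requests
    for ((i = 0; i < 100; i++)); do
        "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done

With -m 0x2 the single reactor lands on core 1, matching the "Reactor started on core 1" notice above, and --interrupt-mode is why the spdk_thread notices mention intr mode rather than the default poll mode.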
00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:50.039 [2024-11-20 15:33:36.475349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.039 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:50.298 Malloc0 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
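The rpc_cmd sequence running here (queue_depth.sh@23 through @27) builds the whole target stack over that socket: a TCP transport, a 64 MiB RAM-backed bdev, a subsystem, its namespace, and a listener on the first target IP. The same five steps as direct rpc.py calls, arguments copied from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192        # transport options as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The -a flag marks the subsystem as allowing any host NQN, which is what lets the bdevperf initiator below connect without an explicit host whitelist.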
00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:50.298 [2024-11-20 15:33:36.559578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.298 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=105012 00:25:50.299 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:25:50.299 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:50.299 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 105012 /var/tmp/bdevperf.sock 00:25:50.299 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 105012 ']' 00:25:50.299 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.299 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:50.299 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.299 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.299 15:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:50.299 [2024-11-20 15:33:36.622075] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
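The measurement itself (queue_depth.sh@29 through @35): bdevperf starts suspended with -z, an NVMe-oF controller is attached to it over its own RPC socket, and bdevperf.py's perform_tests kicks off the run. Reassembled from the trace:

    spdk=/home/vagrant/spdk_repo/spdk
    # -q 1024: queue depth under test; -o 4096: 4 KiB I/Os; -w verify: write/read-back; -t 10: seconds
    "$spdk/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    "$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The summary below is internally consistent: 12567.48 IOPS at 4096 bytes per I/O over the 10.057 s runtime works out to 12567.48 / 256 = 49.09 MiB/s, exactly the throughput column reported.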
00:25:50.299 [2024-11-20 15:33:36.622147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105012 ] 00:25:50.299 [2024-11-20 15:33:36.776527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.558 [2024-11-20 15:33:36.837964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.126 15:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:51.126 15:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:25:51.126 15:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:51.126 15:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.126 15:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:51.126 NVMe0n1 00:25:51.126 15:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.126 15:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:51.384 Running I/O for 10 seconds... 00:25:53.256 11264.00 IOPS, 44.00 MiB/s [2024-11-20T15:33:41.114Z] 11746.50 IOPS, 45.88 MiB/s [2024-11-20T15:33:41.682Z] 11668.00 IOPS, 45.58 MiB/s [2024-11-20T15:33:43.063Z] 11839.00 IOPS, 46.25 MiB/s [2024-11-20T15:33:43.999Z] 12070.40 IOPS, 47.15 MiB/s [2024-11-20T15:33:44.934Z] 12233.17 IOPS, 47.79 MiB/s [2024-11-20T15:33:45.868Z] 12339.00 IOPS, 48.20 MiB/s [2024-11-20T15:33:46.802Z] 12436.88 IOPS, 48.58 MiB/s [2024-11-20T15:33:47.746Z] 12495.11 IOPS, 48.81 MiB/s [2024-11-20T15:33:47.746Z] 12544.00 IOPS, 49.00 MiB/s 00:26:01.264 Latency(us) 00:26:01.264 [2024-11-20T15:33:47.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.264 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:26:01.264 Verification LBA range: start 0x0 length 0x4000 00:26:01.264 NVMe0n1 : 10.06 12567.48 49.09 0.00 0.00 81208.51 19160.73 90960.81 00:26:01.264 [2024-11-20T15:33:47.746Z] =================================================================================================================== 00:26:01.264 [2024-11-20T15:33:47.746Z] Total : 12567.48 49.09 0.00 0.00 81208.51 19160.73 90960.81 00:26:01.264 { 00:26:01.264 "results": [ 00:26:01.264 { 00:26:01.264 "job": "NVMe0n1", 00:26:01.264 "core_mask": "0x1", 00:26:01.264 "workload": "verify", 00:26:01.264 "status": "finished", 00:26:01.264 "verify_range": { 00:26:01.264 "start": 0, 00:26:01.264 "length": 16384 00:26:01.264 }, 00:26:01.264 "queue_depth": 1024, 00:26:01.264 "io_size": 4096, 00:26:01.264 "runtime": 10.05739, 00:26:01.264 "iops": 12567.475259485811, 00:26:01.264 "mibps": 49.09170023236645, 00:26:01.264 "io_failed": 0, 00:26:01.264 "io_timeout": 0, 00:26:01.264 "avg_latency_us": 81208.51117575145, 00:26:01.264 "min_latency_us": 19160.72610441767, 00:26:01.264 "max_latency_us": 90960.80963855422 00:26:01.264 } 00:26:01.264 
], 00:26:01.264 "core_count": 1 00:26:01.264 } 00:26:01.264 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 105012 00:26:01.264 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 105012 ']' 00:26:01.264 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 105012 00:26:01.523 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:26:01.523 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:01.523 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105012 00:26:01.523 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:01.523 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:01.523 killing process with pid 105012 00:26:01.523 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105012' 00:26:01.523 Received shutdown signal, test time was about 10.000000 seconds 00:26:01.523 00:26:01.523 Latency(us) 00:26:01.523 [2024-11-20T15:33:48.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.523 [2024-11-20T15:33:48.005Z] =================================================================================================================== 00:26:01.523 [2024-11-20T15:33:48.005Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:01.523 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 105012 00:26:01.524 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 105012 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:01.783 rmmod nvme_tcp 00:26:01.783 rmmod nvme_fabrics 00:26:01.783 rmmod nvme_keyring 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:26:01.783 15:33:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 104962 ']' 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 104962 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 104962 ']' 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 104962 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104962 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:01.783 killing process with pid 104962 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104962' 00:26:01.783 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 104962 00:26:01.784 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 104962 00:26:02.352 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:02.352 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:02.352 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:02.352 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:26:02.352 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:26:02.352 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:26:02.352 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:02.353 15:33:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.353 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.612 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:26:02.612 00:26:02.612 real 0m14.229s 00:26:02.612 user 0m22.032s 00:26:02.612 sys 0m3.211s 00:26:02.612 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:02.612 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:02.612 ************************************ 00:26:02.612 END TEST nvmf_queue_depth 00:26:02.612 ************************************ 00:26:02.612 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:26:02.612 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:02.612 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.612 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:02.612 ************************************ 00:26:02.612 START TEST nvmf_target_multipath 00:26:02.612 ************************************ 00:26:02.612 15:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:26:02.612 * Looking for test storage... 
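Before the multipath test starts, nvmftestfini has unwound everything; the teardown traced above (common.sh@297 and @233 through @246) reduces to stripping only the SPDK-tagged firewall rules and then deleting the topology. A condensed sketch; the final namespace removal happens inside _remove_spdk_ns with tracing suppressed, so that last line is an assumption:

    # iptr: restore the ruleset minus every rule tagged SPDK_NVMF at setup time
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # unwind the bridge and veth pairs created by nvmf_veth_init
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed: performed by _remove_spdk_ns

Deleting one end of a veth pair removes its peer, so the four *_br interfaces disappear together with the *_if ends.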
00:26:02.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:02.612 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:02.612 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:26:02.612 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:02.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.872 --rc genhtml_branch_coverage=1 00:26:02.872 --rc genhtml_function_coverage=1 00:26:02.872 --rc genhtml_legend=1 00:26:02.872 --rc geninfo_all_blocks=1 00:26:02.872 --rc geninfo_unexecuted_blocks=1 00:26:02.872 00:26:02.872 ' 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:02.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.872 --rc genhtml_branch_coverage=1 00:26:02.872 --rc genhtml_function_coverage=1 00:26:02.872 --rc genhtml_legend=1 00:26:02.872 --rc geninfo_all_blocks=1 00:26:02.872 --rc geninfo_unexecuted_blocks=1 00:26:02.872 00:26:02.872 ' 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:02.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.872 --rc genhtml_branch_coverage=1 00:26:02.872 --rc genhtml_function_coverage=1 00:26:02.872 --rc genhtml_legend=1 00:26:02.872 --rc geninfo_all_blocks=1 00:26:02.872 --rc geninfo_unexecuted_blocks=1 00:26:02.872 00:26:02.872 ' 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:02.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.872 --rc genhtml_branch_coverage=1 00:26:02.872 --rc genhtml_function_coverage=1 00:26:02.872 --rc 
genhtml_legend=1 00:26:02.872 --rc geninfo_all_blocks=1 00:26:02.872 --rc geninfo_unexecuted_blocks=1 00:26:02.872 00:26:02.872 ' 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
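The 'lt 1.15 2' walk through scripts/common.sh just above (cmp_versions, @373 through @368) is how the harness decides whether the installed lcov predates version 2: both version strings are split on '.', '-' and ':' and compared field by field. A schematic bash equivalent with stand-in names, not the verbatim SPDK helper:

    version_lt() {                        # usage: version_lt 1.15 2  ->  exit 0 (true)
        local IFS=.-: v1 v2 i a b
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                          # equal versions are not less-than
    }

For the traced call, field 0 already decides it (1 < 2), so lcov 1.15 is treated as pre-2.0 and the '--rc lcov_branch_coverage' style LCOV_OPTS above are selected.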
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:26:02.872 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.873 15:33:49
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.873 15:33:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:02.873 Cannot find device "nvmf_init_br" 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:02.873 Cannot find device "nvmf_init_br2" 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:02.873 Cannot find device "nvmf_tgt_br" 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:02.873 Cannot find device "nvmf_tgt_br2" 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:26:02.873 Cannot find device "nvmf_init_br" 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:02.873 Cannot find device "nvmf_init_br2" 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:02.873 Cannot find device "nvmf_tgt_br" 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:02.873 Cannot find device "nvmf_tgt_br2" 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:26:02.873 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:03.133 Cannot find device "nvmf_br" 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:03.133 Cannot find device "nvmf_init_if" 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:03.133 Cannot find device "nvmf_init_if2" 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:03.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:03.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:03.133 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:03.393 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:03.393 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:03.393 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:03.393 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:03.393 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:03.393 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:03.393 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:03.393 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:03.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:03.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:26:03.393 00:26:03.393 --- 10.0.0.3 ping statistics --- 00:26:03.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.393 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:26:03.393 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:03.393 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:03.393 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:26:03.393 00:26:03.393 --- 10.0.0.4 ping statistics --- 00:26:03.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.393 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:26:03.393 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:03.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:03.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:26:03.393 00:26:03.393 --- 10.0.0.1 ping statistics --- 00:26:03.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.393 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:26:03.393 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:03.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:03.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:26:03.393 00:26:03.393 --- 10.0.0.2 ping statistics --- 00:26:03.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.393 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:26:03.393 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=105402 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 105402 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 105402 ']' 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:03.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.394 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:03.394 [2024-11-20 15:33:49.799921] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:03.394 [2024-11-20 15:33:49.801055] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:26:03.394 [2024-11-20 15:33:49.801203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.653 [2024-11-20 15:33:49.953313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:03.653 [2024-11-20 15:33:50.016810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.653 [2024-11-20 15:33:50.017131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.653 [2024-11-20 15:33:50.017152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.653 [2024-11-20 15:33:50.017160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.654 [2024-11-20 15:33:50.017168] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:03.654 [2024-11-20 15:33:50.018731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.654 [2024-11-20 15:33:50.018554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.654 [2024-11-20 15:33:50.018736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.654 [2024-11-20 15:33:50.018642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.913 [2024-11-20 15:33:50.146410] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:03.913 [2024-11-20 15:33:50.146516] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:03.913 [2024-11-20 15:33:50.147182] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:03.913 [2024-11-20 15:33:50.147255] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:03.913 [2024-11-20 15:33:50.148168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
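[Annotation] For orientation: the nvmf_veth_init sequence traced above builds the dual-path test network out of nothing but veth pairs, one network namespace, and one bridge. The sketch below condenses those ip/iptables records into a hedged outline; the interface names, addresses, and port 4420 come straight from the trace, while the loop and the comments are editorial shorthand rather than the literal nvmf/common.sh source (the trace additionally tags each iptables rule with an SPDK_NVMF comment so teardown can strip them).

# Target-side interfaces live in a namespace so initiator and target
# can share one host; one veth pair per path on each side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# (All ends are brought up, including lo inside the namespace; condensed.)
# One bridge ties the four peer ends together, and iptables admits
# NVMe/TCP traffic on port 4420 plus bridge-internal forwarding.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings above (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside the namespace) verify both paths in both directions before the target application is started.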
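[Annotation] With the network up, the target is launched inside the namespace in interrupt mode and provisioned over JSON-RPC: one Malloc-backed namespace under a single NQN, with a listener on each path, then the kernel initiator connects twice, once per address. Every command, flag, and identifier below is taken from the trace records that follow; the sketch merely strips the surrounding harness (waits, traps, error handling), and the -o/-u transport options and -r subsystem flag are reproduced as traced (the latter is what enables the ANA reporting exercised later).

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM disk, 512 B blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
# Two connects to the same NQN over different addresses; -g/-G enable
# TCP header/data digests, and hostnqn/hostid pin the host identity.
hostuuid=911e0f85-f038-4b18-8714-1526ee6c3105
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostuuid --hostid=$hostuuid
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostuuid --hostid=$hostuuid

Because both listeners sit behind the same NQN, native NVMe multipath in the kernel collapses the two connects into a single namespace (nvme0n1) with per-path channel devices nvme0c0n1 and nvme0c1n1, which is exactly what the path-detection step below counts before declaring (( 2 == 2 )).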
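[Annotation] The failover exercise below then runs fio against nvme0n1 while flipping each listener's ANA state over RPC and polling sysfs until the kernel agrees. The check_ana_state helper is reconstructed from the target/multipath.sh@18-@26 trace records and should be read as a hedged sketch rather than the verbatim script; one caveat visible in the trace is that the RPC spells non_optimized with an underscore while sysfs reports non-optimized with a hyphen.

# Hedged reconstruction: poll the path's ANA state for up to ~20 s.
check_ana_state() {
    local path=$1 ana_state=$2 timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1s
    done
}

# One failover step as traced: path 1 (10.0.0.3) goes inaccessible,
# path 2 (10.0.0.4) carries the I/O as non-optimized.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
"$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
check_ana_state nvme0c0n1 inaccessible
check_ana_state nvme0c1n1 non-optimized

The two fio passes differ only in the multipath I/O policy the script selects first (the bare "echo numa" and "echo round-robin" records below); xtrace does not show the redirection target, but on native NVMe multipath that value normally lands in /sys/class/nvme-subsystem/nvme-subsys0/iopolicy, so treat that destination as an inference rather than something the log states. Both passes complete their 6-second randrw workloads with err= 0 despite the mid-run ANA flips, which is the pass criterion for the test.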
00:26:04.481 15:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:04.481 15:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:26:04.481 15:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:04.481 15:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:04.481 15:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:04.481 15:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.481 15:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:04.481 [2024-11-20 15:33:50.944133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.740 15:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:04.740 Malloc0 00:26:04.999 15:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:26:04.999 15:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:05.258 15:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:05.517 [2024-11-20 15:33:51.860032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:05.517 15:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:26:05.777 [2024-11-20 15:33:52.067913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:26:05.777 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:26:05.777 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:26:06.036 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:26:06.036 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:26:06.036 15:33:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:06.036 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:06.036 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=105540 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:26:07.942 15:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:26:07.942 [global] 00:26:07.942 thread=1 00:26:07.942 invalidate=1 00:26:07.942 rw=randrw 00:26:07.942 time_based=1 00:26:07.942 runtime=6 00:26:07.942 ioengine=libaio 00:26:07.942 direct=1 00:26:07.942 bs=4096 00:26:07.942 iodepth=128 00:26:07.942 norandommap=0 00:26:07.942 numjobs=1 00:26:07.942 00:26:08.201 verify_dump=1 00:26:08.201 verify_backlog=512 00:26:08.201 verify_state_save=0 00:26:08.201 do_verify=1 00:26:08.201 verify=crc32c-intel 00:26:08.201 [job0] 00:26:08.201 filename=/dev/nvme0n1 00:26:08.201 Could not set queue depth (nvme0n1) 00:26:08.201 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:08.201 fio-3.35 00:26:08.202 Starting 1 thread 00:26:09.140 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:09.399 15:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:26:10.778 15:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:26:10.778 15:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:26:10.778 15:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:10.778 15:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:10.778 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:11.037 15:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:26:11.974 15:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:26:11.974 15:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:26:11.974 15:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:11.974 15:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 105540 00:26:14.515 00:26:14.515 job0: (groupid=0, jobs=1): err= 0: pid=105561: Wed Nov 20 15:34:00 2024 00:26:14.515 read: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(344MiB/6005msec) 00:26:14.515 slat (usec): min=4, max=6635, avg=35.39, stdev=146.44 00:26:14.515 clat (usec): min=295, max=46749, avg=5857.43, stdev=1129.62 00:26:14.515 lat (usec): min=327, max=46759, avg=5892.82, stdev=1135.24 00:26:14.515 clat percentiles (usec): 00:26:14.515 | 1.00th=[ 3654], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5211], 00:26:14.515 | 30.00th=[ 5407], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5932], 00:26:14.515 | 70.00th=[ 6128], 80.00th=[ 6390], 90.00th=[ 6915], 95.00th=[ 7832], 00:26:14.515 | 99.00th=[ 9372], 99.50th=[10028], 99.90th=[12780], 99.95th=[18220], 00:26:14.515 | 99.99th=[20579] 00:26:14.515 bw ( KiB/s): min=16552, max=39680, per=51.10%, avg=29987.91, stdev=8742.61, samples=11 00:26:14.515 iops : min= 4138, max= 9920, avg=7496.91, stdev=2185.62, samples=11 00:26:14.515 write: IOPS=9026, BW=35.3MiB/s (37.0MB/s)(181MiB/5143msec); 0 zone resets 00:26:14.515 slat (usec): min=9, max=7658, avg=50.13, stdev=91.14 00:26:14.515 clat (usec): min=220, max=19888, avg=5270.89, stdev=1089.15 00:26:14.515 lat (usec): min=321, max=19970, avg=5321.02, stdev=1093.19 00:26:14.515 clat percentiles (usec): 00:26:14.515 | 1.00th=[ 2900], 5.00th=[ 3720], 10.00th=[ 4080], 20.00th=[ 4686], 00:26:14.515 | 30.00th=[ 4948], 40.00th=[ 5145], 50.00th=[ 5276], 60.00th=[ 5407], 00:26:14.515 | 70.00th=[ 5538], 80.00th=[ 5735], 90.00th=[ 6063], 95.00th=[ 6849], 00:26:14.515 | 99.00th=[ 8979], 99.50th=[ 9503], 99.90th=[17695], 99.95th=[19006], 00:26:14.515 | 99.99th=[19792] 00:26:14.515 bw ( KiB/s): min=17096, max=38904, per=83.33%, avg=30086.73, stdev=8223.43, samples=11 00:26:14.515 iops : min= 4274, max= 9726, avg=7521.64, stdev=2055.83, samples=11 00:26:14.515 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.03% 00:26:14.515 lat (msec) : 2=0.15%, 4=4.48%, 10=94.89%, 20=0.41%, 50=0.01% 00:26:14.515 cpu : usr=7.40%, sys=33.33%, ctx=13464, majf=0, minf=127 00:26:14.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:14.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:14.515 issued rwts: total=88103,46421,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:14.515 00:26:14.515 Run status group 0 (all jobs): 00:26:14.515 READ: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=344MiB (361MB), run=6005-6005msec 00:26:14.515 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=181MiB (190MB), run=5143-5143msec 00:26:14.515 00:26:14.515 Disk stats (read/write): 00:26:14.515 nvme0n1: ios=87254/45275, merge=0/0, ticks=445067/203569, in_queue=648636, util=98.55% 00:26:14.515 15:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:26:14.515 15:34:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:26:14.774 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:26:14.775 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:26:14.775 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:14.775 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:14.775 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:26:14.775 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:26:14.775 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:26:14.775 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:26:14.775 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:14.775 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:14.775 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:14.775 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:26:14.775 15:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:26:15.711 15:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:26:15.711 15:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:26:15.711 15:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:26:15.711 15:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:26:15.711 15:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=105686 00:26:15.711 15:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:26:15.711 15:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:26:15.969 [global] 00:26:15.969 thread=1 00:26:15.969 invalidate=1 00:26:15.969 rw=randrw 00:26:15.969 time_based=1 00:26:15.969 runtime=6 00:26:15.969 ioengine=libaio 00:26:15.969 direct=1 00:26:15.969 bs=4096 00:26:15.969 iodepth=128 00:26:15.969 norandommap=0 00:26:15.969 numjobs=1 00:26:15.969 00:26:15.969 verify_dump=1 00:26:15.969 verify_backlog=512 00:26:15.969 verify_state_save=0 00:26:15.969 do_verify=1 00:26:15.969 verify=crc32c-intel 00:26:15.969 [job0] 00:26:15.969 filename=/dev/nvme0n1 00:26:15.969 Could not set queue depth (nvme0n1) 00:26:15.969 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:15.969 fio-3.35 00:26:15.969 Starting 1 thread 00:26:16.904 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:17.163 15:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:26:18.541 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:26:18.541 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:18.541 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:18.541 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:18.541 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:18.801 15:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:26:19.789 15:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:26:19.789 15:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:19.789 15:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:19.789 15:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 105686 00:26:22.325 00:26:22.325 job0: (groupid=0, jobs=1): err= 0: pid=105707: Wed Nov 20 15:34:08 2024 00:26:22.325 read: IOPS=15.0k, BW=58.6MiB/s (61.4MB/s)(352MiB/6003msec) 00:26:22.325 slat (usec): min=6, max=5462, avg=31.71, stdev=136.46 00:26:22.325 clat (usec): min=283, max=26650, avg=5756.83, stdev=1535.65 00:26:22.325 lat (usec): min=307, max=26663, avg=5788.54, stdev=1541.73 00:26:22.325 clat percentiles (usec): 00:26:22.325 | 1.00th=[ 2606], 5.00th=[ 3851], 10.00th=[ 4293], 20.00th=[ 4817], 00:26:22.325 | 30.00th=[ 5211], 40.00th=[ 5473], 50.00th=[ 5669], 60.00th=[ 5866], 00:26:22.325 | 70.00th=[ 6063], 80.00th=[ 6390], 90.00th=[ 7046], 95.00th=[ 8160], 00:26:22.325 | 99.00th=[11731], 99.50th=[14746], 99.90th=[17171], 99.95th=[18482], 00:26:22.325 | 99.99th=[22152] 00:26:22.325 bw ( KiB/s): min=16616, max=44552, per=51.81%, avg=31090.18, stdev=8746.47, samples=11 00:26:22.325 iops : min= 4154, max=11138, avg=7772.55, stdev=2186.62, samples=11 00:26:22.325 write: IOPS=8948, BW=35.0MiB/s (36.7MB/s)(186MiB/5310msec); 0 zone resets 00:26:22.325 slat (usec): min=11, max=2053, avg=45.35, stdev=73.71 00:26:22.325 clat (usec): min=175, max=24200, avg=5111.93, stdev=1596.21 00:26:22.325 lat (usec): min=247, max=24236, avg=5157.28, stdev=1600.81 00:26:22.325 clat percentiles (usec): 00:26:22.325 | 1.00th=[ 1483], 5.00th=[ 3163], 10.00th=[ 3556], 20.00th=[ 4047], 00:26:22.325 | 30.00th=[ 4490], 40.00th=[ 4948], 50.00th=[ 5145], 60.00th=[ 5342], 00:26:22.325 | 70.00th=[ 5538], 80.00th=[ 5735], 90.00th=[ 6128], 95.00th=[ 7046], 00:26:22.325 | 99.00th=[12911], 99.50th=[14484], 99.90th=[16188], 99.95th=[18482], 00:26:22.325 | 99.99th=[23725] 00:26:22.325 bw ( KiB/s): min=17624, 
max=44824, per=86.89%, avg=31101.82, stdev=8353.53, samples=11 00:26:22.325 iops : min= 4406, max=11206, avg=7775.45, stdev=2088.38, samples=11 00:26:22.325 lat (usec) : 250=0.01%, 500=0.05%, 750=0.12%, 1000=0.19% 00:26:22.325 lat (msec) : 2=0.58%, 4=9.73%, 10=87.87%, 20=1.43%, 50=0.04% 00:26:22.325 cpu : usr=7.11%, sys=33.15%, ctx=13564, majf=0, minf=90 00:26:22.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:22.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:22.325 issued rwts: total=90053,47516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.325 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:22.325 00:26:22.325 Run status group 0 (all jobs): 00:26:22.325 READ: bw=58.6MiB/s (61.4MB/s), 58.6MiB/s-58.6MiB/s (61.4MB/s-61.4MB/s), io=352MiB (369MB), run=6003-6003msec 00:26:22.325 WRITE: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=186MiB (195MB), run=5310-5310msec 00:26:22.325 00:26:22.325 Disk stats (read/write): 00:26:22.325 nvme0n1: ios=89194/46363, merge=0/0, ticks=453303/208287, in_queue=661590, util=98.63% 00:26:22.325 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:22.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:26:22.325 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:22.325 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:26:22.325 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.325 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:22.325 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:22.325 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:22.325 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:26:22.325 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:26:22.585 15:34:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:22.585 rmmod nvme_tcp 00:26:22.585 rmmod nvme_fabrics 00:26:22.585 rmmod nvme_keyring 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 105402 ']' 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 105402 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 105402 ']' 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 105402 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105402 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:22.585 killing process with pid 105402 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105402' 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 105402 00:26:22.585 15:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 105402 00:26:22.857 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:22.857 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:22.857 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:22.857 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:26:22.857 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:26:22.857 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:22.857 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@791 -- # iptables-restore 00:26:22.857 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:22.857 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:22.857 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:22.857 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:22.858 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:22.858 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.118 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.379 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:26:23.379 00:26:23.379 real 0m20.678s 00:26:23.379 user 1m7.377s 00:26:23.379 sys 0m12.110s 00:26:23.379 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:23.379 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:23.379 ************************************ 00:26:23.379 END TEST nvmf_target_multipath 00:26:23.379 ************************************ 00:26:23.379 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:26:23.379 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:23.379 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:23.379 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:23.379 ************************************ 00:26:23.379 START TEST nvmf_zcopy 00:26:23.379 ************************************ 00:26:23.379 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:26:23.379 * Looking for test storage... 00:26:23.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:23.379 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:23.379 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:26:23.379 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:23.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.640 --rc genhtml_branch_coverage=1 00:26:23.640 --rc genhtml_function_coverage=1 00:26:23.640 --rc genhtml_legend=1 00:26:23.640 --rc geninfo_all_blocks=1 00:26:23.640 --rc geninfo_unexecuted_blocks=1 00:26:23.640 00:26:23.640 ' 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:23.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.640 --rc genhtml_branch_coverage=1 00:26:23.640 --rc genhtml_function_coverage=1 00:26:23.640 --rc genhtml_legend=1 00:26:23.640 --rc geninfo_all_blocks=1 00:26:23.640 --rc geninfo_unexecuted_blocks=1 00:26:23.640 00:26:23.640 ' 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:23.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.640 --rc genhtml_branch_coverage=1 00:26:23.640 --rc genhtml_function_coverage=1 00:26:23.640 --rc genhtml_legend=1 00:26:23.640 --rc geninfo_all_blocks=1 00:26:23.640 --rc geninfo_unexecuted_blocks=1 00:26:23.640 00:26:23.640 ' 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:23.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.640 --rc genhtml_branch_coverage=1 00:26:23.640 --rc genhtml_function_coverage=1 00:26:23.640 --rc genhtml_legend=1 00:26:23.640 --rc geninfo_all_blocks=1 00:26:23.640 --rc geninfo_unexecuted_blocks=1 00:26:23.640 00:26:23.640 ' 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:23.640 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.641 15:34:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:23.641 15:34:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:23.641 Cannot find device "nvmf_init_br" 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:26:23.641 15:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:23.641 Cannot find device "nvmf_init_br2" 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:23.641 Cannot find device "nvmf_tgt_br" 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:23.641 Cannot find device "nvmf_tgt_br2" 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:23.641 Cannot find device "nvmf_init_br" 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:23.641 Cannot find device "nvmf_init_br2" 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:23.641 Cannot find device "nvmf_tgt_br" 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:23.641 Cannot find device "nvmf_tgt_br2" 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:26:23.641 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:23.900 Cannot find device 
"nvmf_br" 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:23.900 Cannot find device "nvmf_init_if" 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:23.900 Cannot find device "nvmf_init_if2" 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:23.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:23.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:23.900 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:23.901 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:23.901 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:23.901 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:23.901 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:24.159 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:24.159 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:26:24.159 00:26:24.159 --- 10.0.0.3 ping statistics --- 00:26:24.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.159 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:24.159 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:26:24.159 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.111 ms 00:26:24.159 00:26:24.159 --- 10.0.0.4 ping statistics --- 00:26:24.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.159 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:24.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:24.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:26:24.159 00:26:24.159 --- 10.0.0.1 ping statistics --- 00:26:24.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.159 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:24.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:26:24.159 00:26:24.159 --- 10.0.0.2 ping statistics --- 00:26:24.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.159 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=106041 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 106041 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 106041 ']' 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
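The nvmf_veth_init steps traced above (nvmf/common.sh@145-225) are hard to follow in xtrace form. Below is a condensed sketch of the topology they build, with interface names, namespace name, addresses, and iptables rules taken verbatim from this log; the consolidated script itself is illustrative, not the harness code.

    # Two initiator veth pairs stay in the default netns; the two target
    # pairs have one end moved into nvmf_tgt_ns_spdk.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiators get 10.0.0.1/.2, the namespaced target side 10.0.0.3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the four host-side peers so 10.0.0.0/24 is one L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" master nvmf_br
    done

    # Tagged ACCEPT rules; the SPDK_NVMF comment is what lets the teardown
    # path sweep them with: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # Verify both directions before starting the target, as the pings above do.
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2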
00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.159 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:24.160 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:26:24.160 [2024-11-20 15:34:10.593177] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:24.160 [2024-11-20 15:34:10.594061] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:26:24.160 [2024-11-20 15:34:10.594103] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.418 [2024-11-20 15:34:10.746912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.418 [2024-11-20 15:34:10.808504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.418 [2024-11-20 15:34:10.808553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.418 [2024-11-20 15:34:10.808564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.418 [2024-11-20 15:34:10.808573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.418 [2024-11-20 15:34:10.808580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.418 [2024-11-20 15:34:10.808974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.677 [2024-11-20 15:34:10.936906] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:24.677 [2024-11-20 15:34:10.937242] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
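The startup sequence here boils down to: run nvmf_tgt inside the namespace with --interrupt-mode, wait for its RPC socket, then configure the zcopy target over RPC. A minimal sketch of that pattern; the poll loop is a stand-in for the harness's waitforlisten helper (not its actual implementation), while the nvmf_tgt flags and the rpc_cmd arguments are copied verbatim from the trace above and below:

    # Start the target in the namespace: shm id 0, all tracepoint groups,
    # interrupt mode, core mask 0x2 (per nvmf/common.sh@508 above).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!

    # Illustrative stand-in for waitforlisten: poll the RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done

    # The RPC sequence target/zcopy.sh then issues (each rpc_cmd in the trace):
    "$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1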
00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:25.245 [2024-11-20 15:34:11.533988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:25.245 [2024-11-20 15:34:11.558318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:26:25.245 15:34:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:25.245 malloc0 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:25.245 { 00:26:25.245 "params": { 00:26:25.245 "name": "Nvme$subsystem", 00:26:25.245 "trtype": "$TEST_TRANSPORT", 00:26:25.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:25.245 "adrfam": "ipv4", 00:26:25.245 "trsvcid": "$NVMF_PORT", 00:26:25.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:25.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:25.245 "hdgst": ${hdgst:-false}, 00:26:25.245 "ddgst": ${ddgst:-false} 00:26:25.245 }, 00:26:25.245 "method": "bdev_nvme_attach_controller" 00:26:25.245 } 00:26:25.245 EOF 00:26:25.245 )") 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:26:25.245 15:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:25.245 "params": { 00:26:25.245 "name": "Nvme1", 00:26:25.245 "trtype": "tcp", 00:26:25.245 "traddr": "10.0.0.3", 00:26:25.245 "adrfam": "ipv4", 00:26:25.245 "trsvcid": "4420", 00:26:25.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:25.245 "hdgst": false, 00:26:25.245 "ddgst": false 00:26:25.245 }, 00:26:25.246 "method": "bdev_nvme_attach_controller" 00:26:25.246 }' 00:26:25.246 [2024-11-20 15:34:11.672277] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
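The JSON printed above is how the initiator gets configured without a config file: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry on stdout and bdevperf reads it through process substitution, which is why the invocation shows --json /dev/fd/62. A trimmed sketch of the same pattern; the attach_controller params are verbatim from the trace, while the outer "subsystems" wrapper is the standard SPDK JSON-config shape and is assumed here (the trace only shows the inner entry, and the real helper may add more entries):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    gen_json() {
        cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    }

    # 10 s verify workload, queue depth 128, 8 KiB I/O -- the first run below.
    "$bdevperf" --json <(gen_json) -t 10 -q 128 -w verify -o 8192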
00:26:25.246 [2024-11-20 15:34:11.672355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106092 ] 00:26:25.504 [2024-11-20 15:34:11.822508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.504 [2024-11-20 15:34:11.884388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.763 Running I/O for 10 seconds... 00:26:27.635 7535.00 IOPS, 58.87 MiB/s [2024-11-20T15:34:15.056Z] 7564.50 IOPS, 59.10 MiB/s [2024-11-20T15:34:16.448Z] 7573.00 IOPS, 59.16 MiB/s [2024-11-20T15:34:17.384Z] 7597.75 IOPS, 59.36 MiB/s [2024-11-20T15:34:18.320Z] 7616.60 IOPS, 59.50 MiB/s [2024-11-20T15:34:19.256Z] 7622.50 IOPS, 59.55 MiB/s [2024-11-20T15:34:20.193Z] 7626.57 IOPS, 59.58 MiB/s [2024-11-20T15:34:21.130Z] 7629.38 IOPS, 59.60 MiB/s [2024-11-20T15:34:22.068Z] 7625.89 IOPS, 59.58 MiB/s [2024-11-20T15:34:22.068Z] 7636.00 IOPS, 59.66 MiB/s 00:26:35.586 Latency(us) 00:26:35.586 [2024-11-20T15:34:22.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.586 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:26:35.586 Verification LBA range: start 0x0 length 0x1000 00:26:35.586 Nvme1n1 : 10.01 7639.80 59.69 0.00 0.00 16712.22 2197.69 22845.48 00:26:35.586 [2024-11-20T15:34:22.068Z] =================================================================================================================== 00:26:35.586 [2024-11-20T15:34:22.068Z] Total : 7639.80 59.69 0.00 0.00 16712.22 2197.69 22845.48 00:26:35.846 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=106209 00:26:35.846 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:26:35.846 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:35.846 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:26:35.846 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:26:35.846 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:26:35.846 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:26:35.846 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.846 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.846 { 00:26:35.846 "params": { 00:26:35.846 "name": "Nvme$subsystem", 00:26:35.846 "trtype": "$TEST_TRANSPORT", 00:26:35.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.846 "adrfam": "ipv4", 00:26:35.846 "trsvcid": "$NVMF_PORT", 00:26:35.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.846 "hdgst": ${hdgst:-false}, 00:26:35.846 "ddgst": ${ddgst:-false} 00:26:35.846 }, 00:26:35.846 "method": "bdev_nvme_attach_controller" 00:26:35.846 } 00:26:35.846 EOF 00:26:35.846 )") 00:26:35.846 [2024-11-20 15:34:22.217540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
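The MiB/s column in the latency table above is just the IOPS column times the 8192-byte I/O size; a quick check of the summary row:

    # 7639.80 IOPS x 8192 B per I/O -> MiB/s (1 MiB = 1048576 B)
    echo 'scale=4; 7639.80 * 8192 / 1048576' | bc    # 59.6859, i.e. 59.69 rounded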
00:26:35.847 [2024-11-20 15:34:22.217592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:35.847 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:26:35.847 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:35.847 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:26:35.847 [2024-11-20 15:34:22.229493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:35.847 [2024-11-20 15:34:22.229519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:35.847 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:26:35.847 15:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:35.847 "params": { 00:26:35.847 "name": "Nvme1", 00:26:35.847 "trtype": "tcp", 00:26:35.847 "traddr": "10.0.0.3", 00:26:35.847 "adrfam": "ipv4", 00:26:35.847 "trsvcid": "4420", 00:26:35.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.847 "hdgst": false, 00:26:35.847 "ddgst": false 00:26:35.847 }, 00:26:35.847 "method": "bdev_nvme_attach_controller" 00:26:35.847 }' 00:26:35.847 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:35.847 [2024-11-20 15:34:22.245488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:35.847 [2024-11-20 15:34:22.245508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:35.847 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:35.847 [2024-11-20 15:34:22.257488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:35.847 [2024-11-20 15:34:22.257508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:35.847 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:35.847 [2024-11-20 15:34:22.268610] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
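The burst of paired "Requested NSID 1 already in use" / "Unable to add namespace" errors that runs from here on does not abort the test: the second bdevperf (randrw, -M 50, perfpid 106209) keeps driving I/O while nvmf_subsystem_add_ns is repeatedly re-issued against a namespace that is still attached, and each RPC failure is tolerated and logged. A hypothetical reconstruction of the shape of that churn step; the actual loop lives in target/zcopy.sh and is assumed here, not copied from it:

    # Hypothetical sketch: hammer add_ns while bdevperf ($perfpid) runs;
    # "NSID 1 already in use" failures are expected and swallowed.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    while kill -0 "$perfpid" 2>/dev/null; do
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done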
00:26:35.847 [2024-11-20 15:34:22.268686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106209 ] 00:26:35.847 [2024-11-20 15:34:22.269505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:35.847 [2024-11-20 15:34:22.269526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:35.847 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:35.847 [2024-11-20 15:34:22.281499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:35.847 [2024-11-20 15:34:22.281523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:35.847 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:35.847 [2024-11-20 15:34:22.293494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:35.847 [2024-11-20 15:34:22.293515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:35.847 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:35.847 [2024-11-20 15:34:22.305488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:35.847 [2024-11-20 15:34:22.305510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:35.847 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:35.847 [2024-11-20 15:34:22.317507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:35.847 [2024-11-20 15:34:22.317528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:35.847 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.329477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.329498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:36.107 [2024-11-20 15:34:22.341489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.341510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.357479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.357503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.373518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.373542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.385508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.385532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.401476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.401499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.413491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.413511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.419777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.107 [2024-11-20 15:34:22.425483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.425505] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.437487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.437507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.449497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.449519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.458736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.107 [2024-11-20 15:34:22.461483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.461504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.473498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.473521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.485481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.485503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:36.107 [2024-11-20 15:34:22.497493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:36.107 [2024-11-20 15:34:22.497515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:36.107 2024/11/20 15:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
00:26:36.367 Running I/O for 5 seconds...
00:26:37.149 14695.00 IOPS, 114.80 MiB/s [2024-11-20T15:34:23.631Z]
00:26:38.191 [2024-11-20 15:34:24.414445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.191 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.191 [2024-11-20 15:34:24.429631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.429661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.442550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.442579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.458389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.458436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.474130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.474158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.489790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.489817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.506590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.506619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.521963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.521991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.541617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.541644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.555037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.555065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.570991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.571021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.586517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.586545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 15273.00 IOPS, 119.32 MiB/s [2024-11-20T15:34:24.674Z] [2024-11-20 15:34:24.613774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.613801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.629576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:26:38.192 [2024-11-20 15:34:24.629604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.642832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.642858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.192 [2024-11-20 15:34:24.658585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.192 [2024-11-20 15:34:24.658615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.192 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.451 [2024-11-20 15:34:24.674178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.451 [2024-11-20 15:34:24.674208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.451 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.451 [2024-11-20 15:34:24.689970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.451 [2024-11-20 15:34:24.689997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.451 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.451 [2024-11-20 15:34:24.706900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.451 [2024-11-20 15:34:24.706929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.451 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.451 [2024-11-20 15:34:24.724815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.451 [2024-11-20 15:34:24.724844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.451 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.451 [2024-11-20 15:34:24.741812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.451 [2024-11-20 15:34:24.741840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.451 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.451 [2024-11-20 15:34:24.757564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.451 [2024-11-20 15:34:24.757591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.451 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.451 [2024-11-20 15:34:24.771533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.451 [2024-11-20 15:34:24.771564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.451 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.451 [2024-11-20 15:34:24.787377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.451 [2024-11-20 15:34:24.787407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.451 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.451 [2024-11-20 15:34:24.802698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.451 [2024-11-20 15:34:24.802727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.451 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.451 [2024-11-20 15:34:24.818384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.451 [2024-11-20 15:34:24.818412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.451 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.451 [2024-11-20 15:34:24.833433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.451 [2024-11-20 15:34:24.833461] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.451 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.452 [2024-11-20 15:34:24.844373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.452 [2024-11-20 15:34:24.844401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.452 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.452 [2024-11-20 15:34:24.861096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.452 [2024-11-20 15:34:24.861124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.452 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.452 [2024-11-20 15:34:24.877410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.452 [2024-11-20 15:34:24.877439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.452 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.452 [2024-11-20 15:34:24.891724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.452 [2024-11-20 15:34:24.891752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.452 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.452 [2024-11-20 15:34:24.907413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.452 [2024-11-20 15:34:24.907441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.452 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.452 [2024-11-20 15:34:24.922569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.452 [2024-11-20 15:34:24.922597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.452 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.711 [2024-11-20 15:34:24.938537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.711 [2024-11-20 15:34:24.938565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.711 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.711 [2024-11-20 15:34:24.954823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:24.954852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:24.970206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:24.970234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:24.986440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:24.986470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:25.001903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.001930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:25.017812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.017839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:25.033463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.033493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:26:38.712 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:25.046512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.046540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:25.062308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.062337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:25.077768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.077796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:25.097010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.097039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:25.111842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.111870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:25.128826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.128855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:38.712 [2024-11-20 15:34:25.145352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.145381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:25.159304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.159333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:25.175284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.175313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.712 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.712 [2024-11-20 15:34:25.190630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.712 [2024-11-20 15:34:25.190659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.206609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.206640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.222023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.222053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.238495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.238526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.254156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.254184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.269355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.269384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.281960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.281987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.297825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.297853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.313464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.313490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.327656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.327694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.343225] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.343253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.358926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.358955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.374308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.374337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.389643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.389683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.401814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.401842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.417411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.417439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.430441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.430469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:38.972 [2024-11-20 15:34:25.445730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:38.972 [2024-11-20 15:34:25.445758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:38.972 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.232 [2024-11-20 15:34:25.456963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.232 [2024-11-20 15:34:25.456991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.232 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.232 [2024-11-20 15:34:25.471871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.232 [2024-11-20 15:34:25.471899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.232 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.232 [2024-11-20 15:34:25.487546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.232 [2024-11-20 15:34:25.487575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.232 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.232 [2024-11-20 15:34:25.502848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.232 [2024-11-20 15:34:25.502878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.232 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.232 [2024-11-20 15:34:25.518895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.232 [2024-11-20 15:34:25.518925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.232 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.232 [2024-11-20 15:34:25.534806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:26:39.232 [2024-11-20 15:34:25.534834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.232 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.232 [2024-11-20 15:34:25.549942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.232 [2024-11-20 15:34:25.549969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.232 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.232 [2024-11-20 15:34:25.565521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.232 [2024-11-20 15:34:25.565548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.232 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.232 [2024-11-20 15:34:25.579434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.232 [2024-11-20 15:34:25.579463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.233 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.233 [2024-11-20 15:34:25.594834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.233 [2024-11-20 15:34:25.594862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.233 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.233 15486.33 IOPS, 120.99 MiB/s [2024-11-20T15:34:25.715Z] [2024-11-20 15:34:25.610619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.233 [2024-11-20 15:34:25.610649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.233 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.233 [2024-11-20 15:34:25.626233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.233 [2024-11-20 15:34:25.626262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.233 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.233 [2024-11-20 15:34:25.642316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.233 [2024-11-20 15:34:25.642344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.233 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.233 [2024-11-20 15:34:25.657920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.233 [2024-11-20 15:34:25.657948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.233 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.233 [2024-11-20 15:34:25.674439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.233 [2024-11-20 15:34:25.674470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.233 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.233 [2024-11-20 15:34:25.689888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.233 [2024-11-20 15:34:25.689926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.233 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.233 [2024-11-20 15:34:25.705248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.233 [2024-11-20 15:34:25.705279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.233 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.493 [2024-11-20 15:34:25.721522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.493 [2024-11-20 15:34:25.721551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.493 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.493 [2024-11-20 15:34:25.733517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:26:39.493 [2024-11-20 15:34:25.733544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.493 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.493 [2024-11-20 15:34:25.747414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.493 [2024-11-20 15:34:25.747444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.493 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.493 [2024-11-20 15:34:25.763063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.493 [2024-11-20 15:34:25.763091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.493 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.493 [2024-11-20 15:34:25.778230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.493 [2024-11-20 15:34:25.778258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.493 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.493 [2024-11-20 15:34:25.794460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.493 [2024-11-20 15:34:25.794488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.493 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.493 [2024-11-20 15:34:25.809713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.493 [2024-11-20 15:34:25.809740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.493 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:39.493 [2024-11-20 15:34:25.820639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:39.493 [2024-11-20 15:34:25.820668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:39.493 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:26:39.493 [2024-11-20 15:34:25.835962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:26:39.493 [2024-11-20 15:34:25.835989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:26:39.493 2024/11/20 15:34:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[log condensed: the three lines above ("Requested NSID 1 already in use", "Unable to add namespace", and the matching JSON-RPC Code=-32602 client error) repeat continuously with only the timestamps advancing, from 15:34:25.835 through 15:34:27.603, while the test keeps retrying the duplicate-NSID add against nqn.2016-06.io.spdk:cnode1; the benchmark's periodic progress marks printed during this window were:]
00:26:40.273 15561.25 IOPS, 121.57 MiB/s [2024-11-20T15:34:26.755Z]
00:26:41.312 15612.40 IOPS, 121.97 MiB/s [2024-11-20T15:34:27.794Z]
00:26:41.312 Latency(us)
00:26:41.312 Device Information          : runtime(s)   IOPS       MiB/s    Fail/s   TO/s    Average   min       max
00:26:41.312 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:26:41.312 Nvme1n1                     : 5.01         15618.76   122.02   0.00     0.00    8188.64   1868.70   14949.58
00:26:41.312 ====================================================================================================
00:26:41.312 Total                       :              15618.76   122.02   0.00     0.00    8188.64   1868.70   14949.58
[log condensed: after the performance summary, the same three-line nvmf_subsystem_add_ns failure keeps repeating, with timestamps advancing from 15:34:27.617 through 15:34:27.865]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:41.573 2024/11/20 15:34:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:41.573 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (106209) - No such process 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 106209 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:41.573 delay0 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.573 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:26:41.832 [2024-11-20 15:34:28.096418] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:49.956 Initializing NVMe Controllers 00:26:49.956 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:49.956 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:49.956 Initialization complete. Launching workers. 
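For readers skimming the trace: the zcopy abort stage above (whose worker results follow) boils down to swapping the plain malloc namespace for a delay bdev, so that I/O stays in flight long enough to be aborted, and then pointing the abort example at it. A condensed sketch, with scripts/rpc.py standing in for the test's rpc_cmd wrapper and the default /var/tmp/spdk.sock RPC socket assumed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # drop the plain malloc namespace from the subsystem...
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # ...wrap malloc0 in a delay bdev (-r/-t/-w/-n are avg/p99 read and write latencies in us)
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # ...and re-expose the slow bdev under the same NSID
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # with completions now ~1 s out, submit randrw I/O and abort it mid-flight
    /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'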
00:26:49.956 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 221, failed: 35571 00:26:49.956 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 35645, failed to submit 147 00:26:49.956 success 35585, unsuccessful 60, failed 0 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:49.956 rmmod nvme_tcp 00:26:49.956 rmmod nvme_fabrics 00:26:49.956 rmmod nvme_keyring 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 106041 ']' 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 106041 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 106041 ']' 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 106041 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106041 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:49.956 killing process with pid 106041 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106041' 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 106041 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 106041 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:49.956 15:34:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.956 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:26:49.956 00:26:49.956 real 0m26.089s 00:26:49.956 user 0m35.350s 00:26:49.956 sys 0m12.213s 00:26:49.957 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.957 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:26:49.957 ************************************ 00:26:49.957 END TEST nvmf_zcopy 00:26:49.957 ************************************ 00:26:49.957 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:26:49.957 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:49.957 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.957 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:49.957 ************************************ 00:26:49.957 START TEST nvmf_nmic 00:26:49.957 ************************************ 00:26:49.957 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:26:49.957 * Looking for test storage... 00:26:49.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:49.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.957 --rc genhtml_branch_coverage=1 00:26:49.957 --rc genhtml_function_coverage=1 00:26:49.957 --rc genhtml_legend=1 00:26:49.957 --rc geninfo_all_blocks=1 00:26:49.957 --rc geninfo_unexecuted_blocks=1 00:26:49.957 00:26:49.957 ' 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:49.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.957 --rc genhtml_branch_coverage=1 00:26:49.957 --rc genhtml_function_coverage=1 00:26:49.957 --rc genhtml_legend=1 00:26:49.957 --rc geninfo_all_blocks=1 00:26:49.957 --rc geninfo_unexecuted_blocks=1 00:26:49.957 00:26:49.957 ' 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:49.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.957 --rc genhtml_branch_coverage=1 00:26:49.957 --rc genhtml_function_coverage=1 00:26:49.957 --rc genhtml_legend=1 00:26:49.957 --rc geninfo_all_blocks=1 00:26:49.957 --rc geninfo_unexecuted_blocks=1 00:26:49.957 00:26:49.957 ' 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:49.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.957 --rc genhtml_branch_coverage=1 00:26:49.957 --rc genhtml_function_coverage=1 00:26:49.957 --rc genhtml_legend=1 00:26:49.957 --rc geninfo_all_blocks=1 00:26:49.957 --rc geninfo_unexecuted_blocks=1 00:26:49.957 00:26:49.957 ' 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same /opt triple repeated several more times; collapsed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[paths/export.sh@3 and @4 re-echo the same PATH value with the /opt prefixes rotated to the front; collapsed]
00:26:49.957 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo [same PATH value; collapsed]
00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:49.958 15:34:36
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:49.958 Cannot find device "nvmf_init_br" 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:49.958 Cannot find device "nvmf_init_br2" 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:49.958 Cannot find device "nvmf_tgt_br" 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:49.958 Cannot find device "nvmf_tgt_br2" 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:49.958 Cannot find device "nvmf_init_br" 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:49.958 Cannot find device "nvmf_init_br2" 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:49.958 Cannot find device "nvmf_tgt_br" 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:49.958 Cannot find device "nvmf_tgt_br2" 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
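The teardown probes around this point (each "Cannot find device" answered by a forced true) just clear any stale interfaces from a previous run; the lines that follow rebuild the fixture from scratch. Condensed, the topology nvmf_veth_init assembles is two veth pairs joined by a bridge, with the target ends moved into a private network namespace. A sketch of the first pair (names and the 10.0.0.x addressing are this harness's conventions; the second *_if2/*_br2 pair for 10.0.0.2/10.0.0.4 is built the same way, and every interface is then brought up before the reachability pings below):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge joins the *_br peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port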
00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:49.958 Cannot find device "nvmf_br" 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:49.958 Cannot find device "nvmf_init_if" 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:49.958 Cannot find device "nvmf_init_if2" 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:49.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:49.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:49.958 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:50.219 15:34:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:50.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:50.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:26:50.219 00:26:50.219 --- 10.0.0.3 ping statistics --- 00:26:50.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.219 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:50.219 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:50.219 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:26:50.219 00:26:50.219 --- 10.0.0.4 ping statistics --- 00:26:50.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.219 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:50.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:26:50.219 00:26:50.219 --- 10.0.0.1 ping statistics --- 00:26:50.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.219 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:50.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:26:50.219 00:26:50.219 --- 10.0.0.2 ping statistics --- 00:26:50.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.219 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:50.219 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=106599 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 106599 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 106599 ']' 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.479 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:50.479 [2024-11-20 15:34:36.794408] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:50.479 [2024-11-20 15:34:36.795433] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:26:50.479 [2024-11-20 15:34:36.795482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.479 [2024-11-20 15:34:36.951936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.738 [2024-11-20 15:34:37.015094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.738 [2024-11-20 15:34:37.015156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.738 [2024-11-20 15:34:37.015167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.738 [2024-11-20 15:34:37.015176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.738 [2024-11-20 15:34:37.015183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.738 [2024-11-20 15:34:37.016666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.738 [2024-11-20 15:34:37.016743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.738 [2024-11-20 15:34:37.016957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.738 [2024-11-20 15:34:37.016958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.738 [2024-11-20 15:34:37.147139] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:50.738 [2024-11-20 15:34:37.148133] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:50.738 [2024-11-20 15:34:37.148327] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
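Unpacking the launch line at the top of this block (the remaining poll-group notices continue below): the target runs inside the nvmf_tgt_ns_spdk namespace, and each flag is echoed by a notice in this log. -i 0 is the shared-memory instance id that 'spdk_trace -s nvmf -i 0' attaches to, -e 0xFFFF the tracepoint group mask, --interrupt-mode makes reactors and SPDK threads sleep on file descriptors instead of busy-polling, and -m 0xF is a core mask starting one reactor on each of cores 0-3:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF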
00:26:50.738 [2024-11-20 15:34:37.148437] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:50.738 [2024-11-20 15:34:37.149232] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:51.307 [2024-11-20 15:34:37.744715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.307 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:51.566 Malloc0 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:51.566 [2024-11-20 15:34:37.851705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:51.566 test case1: single bdev can't be used in multiple subsystems 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:51.566 [2024-11-20 15:34:37.875234] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:26:51.566 [2024-11-20 15:34:37.875564] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:26:51.566 [2024-11-20 15:34:37.875808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.566 2024/11/20 15:34:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:51.566 request: 00:26:51.566 { 00:26:51.566 "method": "nvmf_subsystem_add_ns", 00:26:51.566 "params": { 00:26:51.566 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:26:51.566 "namespace": { 00:26:51.566 "bdev_name": "Malloc0", 00:26:51.566 "no_auto_visible": false 00:26:51.566 } 00:26:51.566 } 00:26:51.566 } 00:26:51.566 Got JSON-RPC error response 00:26:51.566 GoRPCClient: error on JSON-RPC call 00:26:51.566 15:34:37 
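The failure is the point of case1: adding Malloc0 to cnode1 earlier took an exclusive_write claim on the bdev (see the bdev.c error above), so a second subsystem cannot open it. A minimal reproduction outside the harness (sketch; scripts/rpc.py and the default /var/tmp/spdk.sock socket assumed):

    rpc=scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # cnode1 claims Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed
    # -> Code=-32602 Msg=Invalid parameters, matching the JSON-RPC response above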
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:51.566 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:26:51.567 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:26:51.567 Adding namespace failed - expected result. 00:26:51.567 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:26:51.567 test case2: host connect to nvmf target in multiple paths 00:26:51.567 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:26:51.567 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:51.567 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.567 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:51.567 [2024-11-20 15:34:37.887373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:51.567 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.567 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:26:51.567 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:26:51.826 15:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:26:51.826 15:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:26:51.826 15:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:51.826 15:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:51.826 15:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:26:53.728 15:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:53.728 15:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:53.728 15:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:53.728 15:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:53.728 15:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:53.728 15:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:26:53.728 15:34:40 
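Case2 then attaches the same host to cnode1 over both listeners, 4420 and 4421, and waitforserial polls until the namespace surfaces as a block device. Condensed from the two connect commands above (the hostnqn/hostid are the values nvme gen-hostnqn produced for this run):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 \
        --hostid=911e0f85-f038-4b18-8714-1526ee6c3105
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 \
        --hostid=911e0f85-f038-4b18-8714-1526ee6c3105   # second path, same subsystem

    # waitforserial: loop until exactly one device carries the subsystem serial
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 1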
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:26:53.728 [global] 00:26:53.728 thread=1 00:26:53.728 invalidate=1 00:26:53.728 rw=write 00:26:53.728 time_based=1 00:26:53.728 runtime=1 00:26:53.728 ioengine=libaio 00:26:53.728 direct=1 00:26:53.728 bs=4096 00:26:53.728 iodepth=1 00:26:53.728 norandommap=0 00:26:53.728 numjobs=1 00:26:53.728 00:26:53.728 verify_dump=1 00:26:53.728 verify_backlog=512 00:26:53.728 verify_state_save=0 00:26:53.728 do_verify=1 00:26:53.728 verify=crc32c-intel 00:26:53.728 [job0] 00:26:53.728 filename=/dev/nvme0n1 00:26:53.728 Could not set queue depth (nvme0n1) 00:26:53.986 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:53.986 fio-3.35 00:26:53.986 Starting 1 thread 00:26:55.360 00:26:55.360 job0: (groupid=0, jobs=1): err= 0: pid=106704: Wed Nov 20 15:34:41 2024 00:26:55.360 read: IOPS=3251, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1000msec) 00:26:55.360 slat (nsec): min=9825, max=28224, avg=11118.02, stdev=1589.22 00:26:55.360 clat (usec): min=125, max=418, avg=157.61, stdev=14.08 00:26:55.360 lat (usec): min=137, max=429, avg=168.73, stdev=14.15 00:26:55.360 clat percentiles (usec): 00:26:55.360 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:26:55.360 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:26:55.360 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:26:55.360 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 243], 99.95th=[ 277], 00:26:55.360 | 99.99th=[ 420] 00:26:55.360 write: IOPS=3584, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1000msec); 0 zone resets 00:26:55.360 slat (usec): min=14, max=155, avg=17.40, stdev= 5.30 00:26:55.360 clat (usec): min=81, max=366, avg=106.92, stdev=11.12 00:26:55.360 lat (usec): min=100, max=384, avg=124.33, stdev=13.38 00:26:55.360 clat percentiles (usec): 00:26:55.360 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 99], 00:26:55.360 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 109], 00:26:55.360 | 70.00th=[ 111], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 127], 00:26:55.360 | 99.00th=[ 139], 99.50th=[ 145], 99.90th=[ 159], 99.95th=[ 229], 00:26:55.360 | 99.99th=[ 367] 00:26:55.360 bw ( KiB/s): min=15032, max=15032, per=100.00%, avg=15032.00, stdev= 0.00, samples=1 00:26:55.360 iops : min= 3758, max= 3758, avg=3758.00, stdev= 0.00, samples=1 00:26:55.360 lat (usec) : 100=13.20%, 250=86.74%, 500=0.06% 00:26:55.360 cpu : usr=1.30%, sys=6.80%, ctx=6835, majf=0, minf=5 00:26:55.360 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.360 issued rwts: total=3251,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.360 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:55.360 00:26:55.360 Run status group 0 (all jobs): 00:26:55.360 READ: bw=12.7MiB/s (13.3MB/s), 12.7MiB/s-12.7MiB/s (13.3MB/s-13.3MB/s), io=12.7MiB (13.3MB), run=1000-1000msec 00:26:55.360 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1000-1000msec 00:26:55.360 00:26:55.360 Disk stats (read/write): 00:26:55.360 nvme0n1: ios=3066/3072, merge=0/0, ticks=515/354, in_queue=869, util=91.57% 00:26:55.360 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:55.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:26:55.360 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:55.360 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.360 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.360 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:55.360 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.360 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:55.360 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.361 rmmod nvme_tcp 00:26:55.361 rmmod nvme_fabrics 00:26:55.361 rmmod nvme_keyring 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 106599 ']' 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 106599 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 106599 ']' 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 106599 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106599 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:55.361 killing process with pid 106599 00:26:55.361 15:34:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106599' 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 106599 00:26:55.361 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 106599 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:55.620 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:26:55.880 00:26:55.880 real 0m6.462s 00:26:55.880 user 0m15.461s 00:26:55.880 sys 0m2.402s 00:26:55.880 ************************************ 00:26:55.880 END TEST nvmf_nmic 00:26:55.880 ************************************ 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:55.880 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:56.139 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:26:56.139 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:56.139 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:56.139 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:56.139 ************************************ 00:26:56.139 START TEST nvmf_fio_target 00:26:56.139 ************************************ 00:26:56.139 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:26:56.139 * Looking for test storage... 
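For reference, the nvmf_nmic run that just ended reduces to a short RPC sequence. A condensed sketch, assuming the stock scripts/rpc.py client at the path used throughout this log (in test case1 the final add_ns call is expected to fail with -32602, because Malloc0 is already claimed by cnode1; test case2 then connects one host to cnode1 over both listeners):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  host=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105

  # test case1: a bdev claimed by one subsystem cannot be added to another
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'add_ns failed, as expected'

  # test case2: one host, one subsystem, two TCP listeners (two paths)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  nvme connect --hostnqn=$host --hostid=${host#*uuid:} -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  nvme connect --hostnqn=$host --hostid=${host#*uuid:} -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421

The "disconnected 2 controller(s)" line at teardown above confirms that both connects attached to the same subsystem, which is exactly the multipath property case2 checks.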
00:26:56.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:56.139 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:56.139 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:26:56.139 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:56.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.400 --rc genhtml_branch_coverage=1 00:26:56.400 --rc genhtml_function_coverage=1 00:26:56.400 --rc genhtml_legend=1 00:26:56.400 --rc geninfo_all_blocks=1 00:26:56.400 --rc geninfo_unexecuted_blocks=1 00:26:56.400 00:26:56.400 ' 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:56.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.400 --rc genhtml_branch_coverage=1 00:26:56.400 --rc genhtml_function_coverage=1 00:26:56.400 --rc genhtml_legend=1 00:26:56.400 --rc geninfo_all_blocks=1 00:26:56.400 --rc geninfo_unexecuted_blocks=1 00:26:56.400 00:26:56.400 ' 00:26:56.400 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:56.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.400 --rc genhtml_branch_coverage=1 00:26:56.400 --rc genhtml_function_coverage=1 00:26:56.400 --rc genhtml_legend=1 00:26:56.400 --rc geninfo_all_blocks=1 00:26:56.400 --rc geninfo_unexecuted_blocks=1 00:26:56.400 00:26:56.400 ' 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:56.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.401 --rc genhtml_branch_coverage=1 00:26:56.401 --rc genhtml_function_coverage=1 00:26:56.401 --rc genhtml_legend=1 00:26:56.401 --rc geninfo_all_blocks=1 00:26:56.401 --rc geninfo_unexecuted_blocks=1 00:26:56.401 
00:26:56.401 ' 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:56.401 15:34:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:56.401 Cannot find device "nvmf_init_br" 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:56.401 Cannot find device "nvmf_init_br2" 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:26:56.401 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:56.401 Cannot find device "nvmf_tgt_br" 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:56.402 Cannot find device "nvmf_tgt_br2" 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:56.402 Cannot find device "nvmf_init_br" 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:56.402 Cannot find device "nvmf_init_br2" 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:56.402 Cannot find device "nvmf_tgt_br" 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:56.402 Cannot find device "nvmf_tgt_br2" 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:56.402 Cannot find device "nvmf_br" 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:56.402 Cannot find device "nvmf_init_if" 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:26:56.402 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:56.662 Cannot find device "nvmf_init_if2" 00:26:56.662 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:26:56.662 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:56.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:56.662 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:26:56.662 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:56.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:56.662 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:26:56.662 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:56.662 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:56.662 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:56.662 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:56.662 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:56.662 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:56.662 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:56.662 15:34:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:56.662 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:56.922 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:56.922 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:56.922 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:56.922 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:56.922 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:56.922 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:56.922 15:34:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:56.922 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:56.922 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:56.922 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:26:56.922 00:26:56.922 --- 10.0.0.3 ping statistics --- 00:26:56.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.922 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:26:56.922 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:56.922 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:56.922 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:26:56.923 00:26:56.923 --- 10.0.0.4 ping statistics --- 00:26:56.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.923 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:56.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:56.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:26:56.923 00:26:56.923 --- 10.0.0.1 ping statistics --- 00:26:56.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.923 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:56.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:56.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:26:56.923 00:26:56.923 --- 10.0.0.2 ping statistics --- 00:26:56.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.923 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=106938 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 106938 00:26:56.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 106938 ']' 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.923 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:56.923 [2024-11-20 15:34:43.323143] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
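The interface setup traced above (the nvmf_veth_init block, common.sh@145 through @225) builds a two-path virtual network: two initiator-side veths in the root namespace (10.0.0.1 and 10.0.0.2) and two target-side veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined by the nvmf_br bridge, with iptables ACCEPT rules for port 4420 and the four pings confirming reachability in both directions. A standalone sketch with the same names and addresses (ordering condensed; the lo link and ports 4421/4422 are left out, as in this trace):

  ip netns add nvmf_tgt_ns_spdk
  for p in nvmf_init_if:nvmf_init_br nvmf_init_if2:nvmf_init_br2 \
           nvmf_tgt_if:nvmf_tgt_br nvmf_tgt_if2:nvmf_tgt_br2; do
      ip link add "${p%%:*}" type veth peer name "${p##*:}"   # veth pair: endpoint + bridge leg
  done
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target endpoints live in the netns
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_if nvmf_init_if2; do ip link set "$l" up; done
  for l in nvmf_tgt_if nvmf_tgt_if2; do ip netns exec nvmf_tgt_ns_spdk ip link set "$l" up; done
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" up && ip link set "$l" master nvmf_br  # plug all four legs into the bridge
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                          # root ns -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target ns -> initiator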
00:26:56.923 [2024-11-20 15:34:43.324186] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:26:56.923 [2024-11-20 15:34:43.324256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.183 [2024-11-20 15:34:43.479404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:57.183 [2024-11-20 15:34:43.538933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.183 [2024-11-20 15:34:43.538980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:57.183 [2024-11-20 15:34:43.538990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:57.183 [2024-11-20 15:34:43.538998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:57.183 [2024-11-20 15:34:43.539005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:57.183 [2024-11-20 15:34:43.540303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.183 [2024-11-20 15:34:43.540537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:57.183 [2024-11-20 15:34:43.541332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.183 [2024-11-20 15:34:43.541333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:57.442 [2024-11-20 15:34:43.664443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:57.442 [2024-11-20 15:34:43.664693] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:57.442 [2024-11-20 15:34:43.665403] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:57.442 [2024-11-20 15:34:43.665474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:57.442 [2024-11-20 15:34:43.666165] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
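With the app up in interrupt mode, the fio.sh steps that follow provision the target and connect the host; the whole sequence condenses to roughly the RPC calls below. A sketch using the same rpc.py path, sizes, and NQNs that appear in the trace after this point (the four namespaces, Malloc0, Malloc1, raid0, and concat0, are what surface on the initiator as nvme0n1 through nvme0n4 for the fio jobs):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                # same options as the traced call
  for i in 0 1 2 3 4 5 6; do
      $rpc bdev_malloc_create 64 512                          # Malloc0..Malloc6: 64 MiB, 512 B blocks
  done
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'                 # striped raid0
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'  # concatenation
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for ns in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 \
      --hostid=911e0f85-f038-4b18-8714-1526ee6c3105

The waitforserial helper then polls lsblk until all four SPDKISFASTANDAWESOME namespaces appear before fio starts.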
00:26:58.011 15:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.011 15:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:26:58.012 15:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:58.012 15:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:58.012 15:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:58.012 15:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.012 15:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:58.012 [2024-11-20 15:34:44.450259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.271 15:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:58.530 15:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:26:58.530 15:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:58.530 15:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:26:58.530 15:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:58.789 15:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:26:58.789 15:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:59.049 15:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:26:59.049 15:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:26:59.308 15:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:59.567 15:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:26:59.567 15:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:59.826 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:26:59.826 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:00.085 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:27:00.085 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:27:00.345 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:00.603 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:27:00.603 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:00.861 15:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:27:00.861 15:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:00.861 15:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:01.119 [2024-11-20 15:34:47.518187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:01.119 15:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:27:01.376 15:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:27:01.633 15:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:27:01.633 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:27:01.633 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:27:01.633 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:01.633 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:27:01.633 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:27:01.633 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:27:04.163 15:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:04.163 15:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:04.163 15:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:04.163 15:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:27:04.163 15:34:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:04.163 15:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:27:04.163 15:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:27:04.163 [global] 00:27:04.163 thread=1 00:27:04.163 invalidate=1 00:27:04.163 rw=write 00:27:04.163 time_based=1 00:27:04.163 runtime=1 00:27:04.163 ioengine=libaio 00:27:04.163 direct=1 00:27:04.163 bs=4096 00:27:04.163 iodepth=1 00:27:04.163 norandommap=0 00:27:04.163 numjobs=1 00:27:04.163 00:27:04.163 verify_dump=1 00:27:04.163 verify_backlog=512 00:27:04.163 verify_state_save=0 00:27:04.163 do_verify=1 00:27:04.163 verify=crc32c-intel 00:27:04.163 [job0] 00:27:04.163 filename=/dev/nvme0n1 00:27:04.163 [job1] 00:27:04.163 filename=/dev/nvme0n2 00:27:04.163 [job2] 00:27:04.163 filename=/dev/nvme0n3 00:27:04.163 [job3] 00:27:04.163 filename=/dev/nvme0n4 00:27:04.163 Could not set queue depth (nvme0n1) 00:27:04.163 Could not set queue depth (nvme0n2) 00:27:04.163 Could not set queue depth (nvme0n3) 00:27:04.163 Could not set queue depth (nvme0n4) 00:27:04.163 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:04.163 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:04.163 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:04.163 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:04.163 fio-3.35 00:27:04.163 Starting 4 threads 00:27:05.099 00:27:05.099 job0: (groupid=0, jobs=1): err= 0: pid=107226: Wed Nov 20 15:34:51 2024 00:27:05.099 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:27:05.099 slat (nsec): min=6174, max=37039, avg=10796.91, stdev=2769.91 00:27:05.099 clat (usec): min=207, max=585, avg=327.67, stdev=35.99 00:27:05.099 lat (usec): min=216, max=593, avg=338.47, stdev=34.78 00:27:05.099 clat percentiles (usec): 00:27:05.099 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 302], 00:27:05.099 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:27:05.099 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[ 375], 95.00th=[ 392], 00:27:05.099 | 99.00th=[ 453], 99.50th=[ 498], 99.90th=[ 578], 99.95th=[ 586], 00:27:05.099 | 99.99th=[ 586] 00:27:05.099 write: IOPS=1881, BW=7524KiB/s (7705kB/s)(7532KiB/1001msec); 0 zone resets 00:27:05.099 slat (usec): min=11, max=112, avg=22.94, stdev= 4.98 00:27:05.099 clat (usec): min=143, max=2968, avg=230.14, stdev=69.04 00:27:05.099 lat (usec): min=161, max=2994, avg=253.08, stdev=68.85 00:27:05.099 clat percentiles (usec): 00:27:05.099 | 1.00th=[ 190], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:27:05.099 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:27:05.099 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 273], 00:27:05.099 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 742], 99.95th=[ 2966], 00:27:05.099 | 99.99th=[ 2966] 00:27:05.099 bw ( KiB/s): min= 8192, max= 8192, per=26.86%, avg=8192.00, stdev= 0.00, samples=1 00:27:05.099 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:27:05.099 lat (usec) : 250=48.55%, 500=51.16%, 750=0.26% 00:27:05.099 lat (msec) : 4=0.03% 
00:27:05.099 cpu : usr=0.80%, sys=4.50%, ctx=3419, majf=0, minf=9 00:27:05.099 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.099 issued rwts: total=1536,1883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.099 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:05.100 job1: (groupid=0, jobs=1): err= 0: pid=107227: Wed Nov 20 15:34:51 2024 00:27:05.100 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:27:05.100 slat (nsec): min=10505, max=27542, avg=12021.20, stdev=1728.71 00:27:05.100 clat (usec): min=195, max=1615, avg=326.99, stdev=47.49 00:27:05.100 lat (usec): min=207, max=1627, avg=339.02, stdev=47.42 00:27:05.100 clat percentiles (usec): 00:27:05.100 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 302], 00:27:05.100 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:27:05.100 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 379], 00:27:05.100 | 99.00th=[ 429], 99.50th=[ 510], 99.90th=[ 627], 99.95th=[ 1614], 00:27:05.100 | 99.99th=[ 1614] 00:27:05.100 write: IOPS=1889, BW=7556KiB/s (7738kB/s)(7564KiB/1001msec); 0 zone resets 00:27:05.100 slat (usec): min=14, max=129, avg=25.30, stdev= 7.21 00:27:05.100 clat (usec): min=140, max=733, avg=226.00, stdev=28.26 00:27:05.100 lat (usec): min=162, max=757, avg=251.30, stdev=29.12 00:27:05.100 clat percentiles (usec): 00:27:05.100 | 1.00th=[ 172], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:27:05.100 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:27:05.100 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 269], 00:27:05.100 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 725], 99.95th=[ 734], 00:27:05.100 | 99.99th=[ 734] 00:27:05.100 bw ( KiB/s): min= 8192, max= 8192, per=26.86%, avg=8192.00, stdev= 0.00, samples=1 00:27:05.100 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:27:05.100 lat (usec) : 250=50.16%, 500=49.46%, 750=0.35% 00:27:05.100 lat (msec) : 2=0.03% 00:27:05.100 cpu : usr=0.70%, sys=5.10%, ctx=3428, majf=0, minf=13 00:27:05.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.100 issued rwts: total=1536,1891,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:05.100 job2: (groupid=0, jobs=1): err= 0: pid=107228: Wed Nov 20 15:34:51 2024 00:27:05.100 read: IOPS=1718, BW=6873KiB/s (7038kB/s)(6880KiB/1001msec) 00:27:05.100 slat (nsec): min=10071, max=31279, avg=11856.18, stdev=2020.26 00:27:05.100 clat (usec): min=198, max=470, avg=289.75, stdev=40.99 00:27:05.100 lat (usec): min=209, max=481, avg=301.60, stdev=41.38 00:27:05.100 clat percentiles (usec): 00:27:05.100 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:27:05.100 | 30.00th=[ 249], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:27:05.100 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 338], 00:27:05.100 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 433], 99.95th=[ 469], 00:27:05.100 | 99.99th=[ 469] 00:27:05.100 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:27:05.100 slat (usec): min=15, max=130, avg=24.22, stdev= 7.37 00:27:05.100 
clat (usec): min=126, max=443, avg=208.71, stdev=31.64 00:27:05.100 lat (usec): min=143, max=470, avg=232.93, stdev=35.67 00:27:05.100 clat percentiles (usec): 00:27:05.100 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 172], 00:27:05.100 | 30.00th=[ 202], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 223], 00:27:05.100 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 239], 95.00th=[ 247], 00:27:05.100 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 367], 99.95th=[ 383], 00:27:05.100 | 99.99th=[ 445] 00:27:05.100 bw ( KiB/s): min= 8192, max= 8192, per=26.86%, avg=8192.00, stdev= 0.00, samples=1 00:27:05.100 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:27:05.100 lat (usec) : 250=66.00%, 500=34.00% 00:27:05.100 cpu : usr=1.10%, sys=5.00%, ctx=3768, majf=0, minf=13 00:27:05.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.100 issued rwts: total=1720,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:05.100 job3: (groupid=0, jobs=1): err= 0: pid=107229: Wed Nov 20 15:34:51 2024 00:27:05.100 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:27:05.100 slat (nsec): min=7375, max=30041, avg=10497.55, stdev=2629.53 00:27:05.100 clat (usec): min=211, max=2163, avg=332.52, stdev=58.47 00:27:05.100 lat (usec): min=228, max=2173, avg=343.02, stdev=58.81 00:27:05.100 clat percentiles (usec): 00:27:05.100 | 1.00th=[ 273], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:27:05.100 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 330], 00:27:05.100 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 388], 00:27:05.100 | 99.00th=[ 465], 99.50th=[ 515], 99.90th=[ 742], 99.95th=[ 2180], 00:27:05.100 | 99.99th=[ 2180] 00:27:05.100 write: IOPS=1808, BW=7233KiB/s (7406kB/s)(7240KiB/1001msec); 0 zone resets 00:27:05.100 slat (usec): min=12, max=130, avg=22.99, stdev= 7.05 00:27:05.100 clat (usec): min=119, max=3891, avg=236.21, stdev=112.86 00:27:05.100 lat (usec): min=135, max=3954, avg=259.20, stdev=113.50 00:27:05.100 clat percentiles (usec): 00:27:05.100 | 1.00th=[ 186], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219], 00:27:05.100 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:27:05.100 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 269], 00:27:05.100 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 2966], 99.95th=[ 3884], 00:27:05.100 | 99.99th=[ 3884] 00:27:05.100 bw ( KiB/s): min= 8192, max= 8192, per=26.86%, avg=8192.00, stdev= 0.00, samples=1 00:27:05.100 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:27:05.100 lat (usec) : 250=46.53%, 500=53.08%, 750=0.27% 00:27:05.100 lat (msec) : 2=0.03%, 4=0.09% 00:27:05.100 cpu : usr=1.20%, sys=4.50%, ctx=3346, majf=0, minf=11 00:27:05.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.100 issued rwts: total=1536,1810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:05.100 00:27:05.100 Run status group 0 (all jobs): 00:27:05.100 READ: bw=24.7MiB/s (25.9MB/s), 6138KiB/s-6873KiB/s (6285kB/s-7038kB/s), io=24.7MiB (25.9MB), 
run=1001-1001msec 00:27:05.100 WRITE: bw=29.8MiB/s (31.2MB/s), 7233KiB/s-8184KiB/s (7406kB/s-8380kB/s), io=29.8MiB (31.3MB), run=1001-1001msec 00:27:05.100 00:27:05.100 Disk stats (read/write): 00:27:05.100 nvme0n1: ios=1481/1536, merge=0/0, ticks=487/364, in_queue=851, util=88.25% 00:27:05.100 nvme0n2: ios=1476/1536, merge=0/0, ticks=505/348, in_queue=853, util=89.25% 00:27:05.100 nvme0n3: ios=1550/1540, merge=0/0, ticks=479/357, in_queue=836, util=89.19% 00:27:05.100 nvme0n4: ios=1361/1536, merge=0/0, ticks=458/364, in_queue=822, util=89.29% 00:27:05.100 15:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:27:05.100 [global] 00:27:05.100 thread=1 00:27:05.100 invalidate=1 00:27:05.100 rw=randwrite 00:27:05.100 time_based=1 00:27:05.100 runtime=1 00:27:05.100 ioengine=libaio 00:27:05.100 direct=1 00:27:05.100 bs=4096 00:27:05.100 iodepth=1 00:27:05.100 norandommap=0 00:27:05.100 numjobs=1 00:27:05.100 00:27:05.100 verify_dump=1 00:27:05.100 verify_backlog=512 00:27:05.100 verify_state_save=0 00:27:05.100 do_verify=1 00:27:05.100 verify=crc32c-intel 00:27:05.100 [job0] 00:27:05.100 filename=/dev/nvme0n1 00:27:05.100 [job1] 00:27:05.100 filename=/dev/nvme0n2 00:27:05.100 [job2] 00:27:05.100 filename=/dev/nvme0n3 00:27:05.100 [job3] 00:27:05.100 filename=/dev/nvme0n4 00:27:05.359 Could not set queue depth (nvme0n1) 00:27:05.359 Could not set queue depth (nvme0n2) 00:27:05.359 Could not set queue depth (nvme0n3) 00:27:05.359 Could not set queue depth (nvme0n4) 00:27:05.359 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:05.359 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:05.359 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:05.359 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:05.359 fio-3.35 00:27:05.359 Starting 4 threads 00:27:06.806 00:27:06.806 job0: (groupid=0, jobs=1): err= 0: pid=107282: Wed Nov 20 15:34:52 2024 00:27:06.806 read: IOPS=990, BW=3960KiB/s (4055kB/s)(3964KiB/1001msec) 00:27:06.806 slat (nsec): min=9910, max=93289, avg=30949.47, stdev=6993.80 00:27:06.806 clat (usec): min=235, max=1749, avg=543.82, stdev=72.50 00:27:06.806 lat (usec): min=246, max=1780, avg=574.77, stdev=74.18 00:27:06.806 clat percentiles (usec): 00:27:06.806 | 1.00th=[ 258], 5.00th=[ 469], 10.00th=[ 494], 20.00th=[ 515], 00:27:06.806 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 562], 00:27:06.806 | 70.00th=[ 570], 80.00th=[ 578], 90.00th=[ 594], 95.00th=[ 603], 00:27:06.806 | 99.00th=[ 693], 99.50th=[ 766], 99.90th=[ 1745], 99.95th=[ 1745], 00:27:06.806 | 99.99th=[ 1745] 00:27:06.806 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:27:06.806 slat (usec): min=15, max=109, avg=45.34, stdev=11.11 00:27:06.806 clat (usec): min=130, max=1378, avg=368.71, stdev=107.68 00:27:06.806 lat (usec): min=167, max=1448, avg=414.04, stdev=113.83 00:27:06.806 clat percentiles (usec): 00:27:06.806 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 338], 00:27:06.806 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 412], 00:27:06.806 | 70.00th=[ 424], 80.00th=[ 437], 90.00th=[ 453], 95.00th=[ 474], 00:27:06.806 | 99.00th=[ 537], 99.50th=[ 578], 99.90th=[ 627], 
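The fio-wrapper invocation above (-p nvmf -i 4096 -d 1 -t randwrite -r 1 -v) expands into the [global] job file just echoed, with one job per namespace exported by cnode1: -i maps to bs, -d to iodepth, -t to rw, -r to runtime with time_based, and -v enables crc32c-intel verification. A rough standalone equivalent for a single namespace — inferred from the echoed job file, not from the wrapper's source — would be:

    # Approximation of one job from the generated config above; the flag
    # mapping is an inference from this log, not the wrapper's actual code.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread --invalidate=1 \
        --rw=randwrite --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel \
        --verify_dump=1 --verify_backlog=512 --verify_state_save=0
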
99.95th=[ 1385], 00:27:06.806 | 99.99th=[ 1385] 00:27:06.806 bw ( KiB/s): min= 4368, max= 4368, per=21.10%, avg=4368.00, stdev= 0.00, samples=1 00:27:06.806 iops : min= 1092, max= 1092, avg=1092.00, stdev= 0.00, samples=1 00:27:06.806 lat (usec) : 250=9.43%, 500=46.70%, 750=43.52%, 1000=0.25% 00:27:06.806 lat (msec) : 2=0.10% 00:27:06.806 cpu : usr=1.00%, sys=6.40%, ctx=2016, majf=0, minf=13 00:27:06.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:06.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.806 issued rwts: total=991,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:06.806 job1: (groupid=0, jobs=1): err= 0: pid=107283: Wed Nov 20 15:34:52 2024 00:27:06.806 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:27:06.806 slat (nsec): min=6834, max=40969, avg=8411.52, stdev=3502.78 00:27:06.806 clat (usec): min=161, max=41372, avg=362.29, stdev=1047.79 00:27:06.806 lat (usec): min=169, max=41384, avg=370.71, stdev=1047.90 00:27:06.806 clat percentiles (usec): 00:27:06.806 | 1.00th=[ 227], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 310], 00:27:06.806 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:27:06.806 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 379], 95.00th=[ 396], 00:27:06.806 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 529], 99.95th=[41157], 00:27:06.806 | 99.99th=[41157] 00:27:06.806 write: IOPS=1569, BW=6278KiB/s (6428kB/s)(6284KiB/1001msec); 0 zone resets 00:27:06.806 slat (usec): min=9, max=105, avg=19.05, stdev= 7.51 00:27:06.806 clat (usec): min=103, max=1957, avg=252.86, stdev=55.47 00:27:06.806 lat (usec): min=130, max=1977, avg=271.90, stdev=56.55 00:27:06.806 clat percentiles (usec): 00:27:06.806 | 1.00th=[ 169], 5.00th=[ 200], 10.00th=[ 219], 20.00th=[ 231], 00:27:06.806 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:27:06.806 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 310], 00:27:06.806 | 99.00th=[ 343], 99.50th=[ 375], 99.90th=[ 783], 99.95th=[ 1958], 00:27:06.806 | 99.99th=[ 1958] 00:27:06.806 bw ( KiB/s): min= 8136, max= 8136, per=39.30%, avg=8136.00, stdev= 0.00, samples=1 00:27:06.806 iops : min= 2034, max= 2034, avg=2034.00, stdev= 0.00, samples=1 00:27:06.806 lat (usec) : 250=27.13%, 500=72.67%, 750=0.10%, 1000=0.03% 00:27:06.806 lat (msec) : 2=0.03%, 50=0.03% 00:27:06.806 cpu : usr=1.00%, sys=3.50%, ctx=3111, majf=0, minf=11 00:27:06.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:06.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.806 issued rwts: total=1536,1571,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:06.806 job2: (groupid=0, jobs=1): err= 0: pid=107284: Wed Nov 20 15:34:52 2024 00:27:06.806 read: IOPS=983, BW=3932KiB/s (4026kB/s)(3936KiB/1001msec) 00:27:06.806 slat (nsec): min=10680, max=80420, avg=30895.11, stdev=7988.11 00:27:06.806 clat (usec): min=235, max=5837, avg=548.86, stdev=178.89 00:27:06.806 lat (usec): min=246, max=5869, avg=579.75, stdev=179.71 00:27:06.806 clat percentiles (usec): 00:27:06.806 | 1.00th=[ 253], 5.00th=[ 469], 10.00th=[ 498], 20.00th=[ 519], 00:27:06.806 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 
562], 00:27:06.806 | 70.00th=[ 570], 80.00th=[ 578], 90.00th=[ 594], 95.00th=[ 603], 00:27:06.806 | 99.00th=[ 685], 99.50th=[ 734], 99.90th=[ 5866], 99.95th=[ 5866], 00:27:06.806 | 99.99th=[ 5866] 00:27:06.806 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:27:06.806 slat (nsec): min=15459, max=94665, avg=44812.81, stdev=9664.42 00:27:06.806 clat (usec): min=136, max=692, avg=368.47, stdev=103.13 00:27:06.806 lat (usec): min=163, max=738, avg=413.28, stdev=109.13 00:27:06.806 clat percentiles (usec): 00:27:06.806 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 351], 00:27:06.806 | 30.00th=[ 375], 40.00th=[ 392], 50.00th=[ 400], 60.00th=[ 412], 00:27:06.806 | 70.00th=[ 424], 80.00th=[ 437], 90.00th=[ 453], 95.00th=[ 469], 00:27:06.806 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 660], 99.95th=[ 693], 00:27:06.806 | 99.99th=[ 693] 00:27:06.806 bw ( KiB/s): min= 4304, max= 4304, per=20.79%, avg=4304.00, stdev= 0.00, samples=1 00:27:06.806 iops : min= 1076, max= 1076, avg=1076.00, stdev= 0.00, samples=1 00:27:06.806 lat (usec) : 250=9.51%, 500=45.67%, 750=44.77% 00:27:06.806 lat (msec) : 10=0.05% 00:27:06.806 cpu : usr=1.30%, sys=6.10%, ctx=2008, majf=0, minf=17 00:27:06.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:06.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.806 issued rwts: total=984,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:06.806 job3: (groupid=0, jobs=1): err= 0: pid=107285: Wed Nov 20 15:34:52 2024 00:27:06.806 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:27:06.806 slat (nsec): min=6922, max=76576, avg=8517.74, stdev=3827.01 00:27:06.806 clat (usec): min=168, max=41306, avg=362.26, stdev=1046.14 00:27:06.806 lat (usec): min=176, max=41322, avg=370.78, stdev=1046.37 00:27:06.806 clat percentiles (usec): 00:27:06.806 | 1.00th=[ 225], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[ 310], 00:27:06.806 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:27:06.806 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 379], 95.00th=[ 400], 00:27:06.806 | 99.00th=[ 474], 99.50th=[ 498], 99.90th=[ 537], 99.95th=[41157], 00:27:06.806 | 99.99th=[41157] 00:27:06.806 write: IOPS=1560, BW=6242KiB/s (6392kB/s)(6248KiB/1001msec); 0 zone resets 00:27:06.806 slat (nsec): min=10104, max=65648, avg=19348.27, stdev=6789.00 00:27:06.806 clat (usec): min=112, max=1875, avg=253.98, stdev=60.53 00:27:06.806 lat (usec): min=135, max=1891, avg=273.33, stdev=61.14 00:27:06.806 clat percentiles (usec): 00:27:06.806 | 1.00th=[ 172], 5.00th=[ 204], 10.00th=[ 217], 20.00th=[ 229], 00:27:06.806 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:27:06.806 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 310], 00:27:06.806 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 1303], 99.95th=[ 1876], 00:27:06.806 | 99.99th=[ 1876] 00:27:06.806 bw ( KiB/s): min= 8088, max= 8088, per=39.07%, avg=8088.00, stdev= 0.00, samples=1 00:27:06.806 iops : min= 2022, max= 2022, avg=2022.00, stdev= 0.00, samples=1 00:27:06.806 lat (usec) : 250=26.63%, 500=73.05%, 750=0.19%, 1000=0.03% 00:27:06.806 lat (msec) : 2=0.06%, 50=0.03% 00:27:06.806 cpu : usr=1.00%, sys=3.60%, ctx=3105, majf=0, minf=7 00:27:06.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:06.806 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.806 issued rwts: total=1536,1562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:06.806 00:27:06.806 Run status group 0 (all jobs): 00:27:06.806 READ: bw=19.7MiB/s (20.7MB/s), 3932KiB/s-6138KiB/s (4026kB/s-6285kB/s), io=19.7MiB (20.7MB), run=1001-1001msec 00:27:06.806 WRITE: bw=20.2MiB/s (21.2MB/s), 4092KiB/s-6278KiB/s (4190kB/s-6428kB/s), io=20.2MiB (21.2MB), run=1001-1001msec 00:27:06.806 00:27:06.806 Disk stats (read/write): 00:27:06.806 nvme0n1: ios=825/1024, merge=0/0, ticks=448/398, in_queue=846, util=89.08% 00:27:06.806 nvme0n2: ios=1229/1536, merge=0/0, ticks=500/386, in_queue=886, util=90.41% 00:27:06.806 nvme0n3: ios=801/1024, merge=0/0, ticks=439/404, in_queue=843, util=90.16% 00:27:06.806 nvme0n4: ios=1175/1536, merge=0/0, ticks=432/382, in_queue=814, util=89.80% 00:27:06.806 15:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:27:06.806 [global] 00:27:06.806 thread=1 00:27:06.806 invalidate=1 00:27:06.806 rw=write 00:27:06.806 time_based=1 00:27:06.806 runtime=1 00:27:06.806 ioengine=libaio 00:27:06.806 direct=1 00:27:06.806 bs=4096 00:27:06.806 iodepth=128 00:27:06.806 norandommap=0 00:27:06.806 numjobs=1 00:27:06.806 00:27:06.806 verify_dump=1 00:27:06.806 verify_backlog=512 00:27:06.806 verify_state_save=0 00:27:06.806 do_verify=1 00:27:06.806 verify=crc32c-intel 00:27:06.806 [job0] 00:27:06.806 filename=/dev/nvme0n1 00:27:06.806 [job1] 00:27:06.806 filename=/dev/nvme0n2 00:27:06.806 [job2] 00:27:06.806 filename=/dev/nvme0n3 00:27:06.806 [job3] 00:27:06.806 filename=/dev/nvme0n4 00:27:06.806 Could not set queue depth (nvme0n1) 00:27:06.806 Could not set queue depth (nvme0n2) 00:27:06.806 Could not set queue depth (nvme0n3) 00:27:06.806 Could not set queue depth (nvme0n4) 00:27:06.806 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:06.806 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:06.806 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:06.806 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:06.806 fio-3.35 00:27:06.806 Starting 4 threads 00:27:08.213 00:27:08.213 job0: (groupid=0, jobs=1): err= 0: pid=107344: Wed Nov 20 15:34:54 2024 00:27:08.213 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:27:08.213 slat (usec): min=18, max=8476, avg=193.59, stdev=1020.28 00:27:08.213 clat (usec): min=7384, max=34182, avg=24823.04, stdev=3186.30 00:27:08.213 lat (usec): min=7403, max=34227, avg=25016.62, stdev=3276.73 00:27:08.213 clat percentiles (usec): 00:27:08.213 | 1.00th=[ 8029], 5.00th=[19530], 10.00th=[22414], 20.00th=[23725], 00:27:08.213 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25560], 00:27:08.213 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27919], 95.00th=[28967], 00:27:08.213 | 99.00th=[32113], 99.50th=[32900], 99.90th=[33424], 99.95th=[33817], 00:27:08.213 | 99.99th=[34341] 00:27:08.213 write: IOPS=2550, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:27:08.213 slat (usec): min=18, max=8214, avg=186.28, stdev=887.23 00:27:08.213 clat (usec): 
min=466, max=33235, avg=24621.39, stdev=2258.47 00:27:08.213 lat (usec): min=7052, max=33291, avg=24807.66, stdev=2386.92 00:27:08.213 clat percentiles (usec): 00:27:08.213 | 1.00th=[18482], 5.00th=[22152], 10.00th=[22938], 20.00th=[23462], 00:27:08.213 | 30.00th=[23725], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:27:08.213 | 70.00th=[25035], 80.00th=[25560], 90.00th=[26870], 95.00th=[28443], 00:27:08.213 | 99.00th=[31589], 99.50th=[32637], 99.90th=[33162], 99.95th=[33162], 00:27:08.213 | 99.99th=[33162] 00:27:08.213 bw ( KiB/s): min= 9008, max=11472, per=20.14%, avg=10240.00, stdev=1742.31, samples=2 00:27:08.213 iops : min= 2252, max= 2868, avg=2560.00, stdev=435.58, samples=2 00:27:08.213 lat (usec) : 500=0.02% 00:27:08.213 lat (msec) : 10=0.82%, 20=3.57%, 50=95.59% 00:27:08.213 cpu : usr=3.28%, sys=10.54%, ctx=226, majf=0, minf=7 00:27:08.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:08.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:08.213 issued rwts: total=2560,2568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:08.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:08.213 job1: (groupid=0, jobs=1): err= 0: pid=107345: Wed Nov 20 15:34:54 2024 00:27:08.213 read: IOPS=4842, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1003msec) 00:27:08.213 slat (usec): min=9, max=4216, avg=92.94, stdev=405.27 00:27:08.213 clat (usec): min=2472, max=17276, avg=12860.28, stdev=1558.15 00:27:08.213 lat (usec): min=2508, max=18008, avg=12953.23, stdev=1570.51 00:27:08.213 clat percentiles (usec): 00:27:08.213 | 1.00th=[ 6587], 5.00th=[11076], 10.00th=[11338], 20.00th=[11863], 00:27:08.213 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:27:08.213 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[15139], 00:27:08.213 | 99.00th=[15664], 99.50th=[16057], 99.90th=[16712], 99.95th=[16712], 00:27:08.213 | 99.99th=[17171] 00:27:08.213 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:27:08.213 slat (usec): min=18, max=4577, avg=95.07, stdev=401.71 00:27:08.213 clat (usec): min=9004, max=16038, avg=12549.39, stdev=1377.78 00:27:08.213 lat (usec): min=9037, max=16071, avg=12644.47, stdev=1355.28 00:27:08.213 clat percentiles (usec): 00:27:08.213 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10421], 20.00th=[11076], 00:27:08.213 | 30.00th=[11863], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:27:08.213 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14222], 95.00th=[14484], 00:27:08.213 | 99.00th=[15008], 99.50th=[15533], 99.90th=[15926], 99.95th=[16057], 00:27:08.213 | 99.99th=[16057] 00:27:08.213 bw ( KiB/s): min=20480, max=20521, per=40.33%, avg=20500.50, stdev=28.99, samples=2 00:27:08.213 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:27:08.213 lat (msec) : 4=0.29%, 10=2.71%, 20=97.00% 00:27:08.213 cpu : usr=5.69%, sys=21.06%, ctx=488, majf=0, minf=10 00:27:08.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:08.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:08.213 issued rwts: total=4857,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:08.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:08.213 job2: (groupid=0, jobs=1): err= 0: pid=107346: Wed Nov 20 15:34:54 2024 00:27:08.213 read: 
IOPS=2043, BW=8175KiB/s (8372kB/s)(8200KiB/1003msec) 00:27:08.213 slat (usec): min=7, max=6055, avg=217.07, stdev=788.19 00:27:08.213 clat (usec): min=592, max=36033, avg=27527.16, stdev=2609.55 00:27:08.213 lat (usec): min=4937, max=36055, avg=27744.23, stdev=2595.13 00:27:08.213 clat percentiles (usec): 00:27:08.213 | 1.00th=[21627], 5.00th=[23987], 10.00th=[24511], 20.00th=[25560], 00:27:08.213 | 30.00th=[26084], 40.00th=[26870], 50.00th=[27395], 60.00th=[27919], 00:27:08.213 | 70.00th=[28705], 80.00th=[29230], 90.00th=[31065], 95.00th=[32375], 00:27:08.213 | 99.00th=[33162], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:27:08.213 | 99.99th=[35914] 00:27:08.213 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:27:08.213 slat (usec): min=10, max=7555, avg=205.66, stdev=884.40 00:27:08.213 clat (usec): min=4976, max=35715, avg=27157.86, stdev=4142.92 00:27:08.213 lat (usec): min=5008, max=35748, avg=27363.52, stdev=4101.54 00:27:08.213 clat percentiles (usec): 00:27:08.213 | 1.00th=[ 7242], 5.00th=[21103], 10.00th=[22676], 20.00th=[25297], 00:27:08.213 | 30.00th=[26346], 40.00th=[27132], 50.00th=[27657], 60.00th=[28181], 00:27:08.213 | 70.00th=[28705], 80.00th=[29754], 90.00th=[31327], 95.00th=[32113], 00:27:08.213 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35390], 99.95th=[35914], 00:27:08.213 | 99.99th=[35914] 00:27:08.213 bw ( KiB/s): min= 9288, max=10184, per=19.15%, avg=9736.00, stdev=633.57, samples=2 00:27:08.213 iops : min= 2322, max= 2546, avg=2434.00, stdev=158.39, samples=2 00:27:08.213 lat (usec) : 750=0.02% 00:27:08.213 lat (msec) : 10=0.85%, 20=1.48%, 50=97.66% 00:27:08.213 cpu : usr=2.50%, sys=10.48%, ctx=616, majf=0, minf=11 00:27:08.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:27:08.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:08.213 issued rwts: total=2050,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:08.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:08.213 job3: (groupid=0, jobs=1): err= 0: pid=107347: Wed Nov 20 15:34:54 2024 00:27:08.213 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:27:08.213 slat (usec): min=7, max=6956, avg=216.38, stdev=798.62 00:27:08.213 clat (usec): min=21348, max=35414, avg=28143.81, stdev=2495.20 00:27:08.213 lat (usec): min=21379, max=35434, avg=28360.18, stdev=2474.85 00:27:08.213 clat percentiles (usec): 00:27:08.213 | 1.00th=[22938], 5.00th=[23987], 10.00th=[25035], 20.00th=[26084], 00:27:08.213 | 30.00th=[26870], 40.00th=[27132], 50.00th=[28181], 60.00th=[28705], 00:27:08.213 | 70.00th=[29492], 80.00th=[30016], 90.00th=[31589], 95.00th=[32637], 00:27:08.213 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:27:08.213 | 99.99th=[35390] 00:27:08.213 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(9.96MiB/1005msec); 0 zone resets 00:27:08.214 slat (usec): min=23, max=7616, avg=206.83, stdev=882.13 00:27:08.214 clat (usec): min=4360, max=34882, avg=26791.66, stdev=4139.87 00:27:08.214 lat (usec): min=4394, max=34920, avg=26998.49, stdev=4098.61 00:27:08.214 clat percentiles (usec): 00:27:08.214 | 1.00th=[ 5342], 5.00th=[21365], 10.00th=[22414], 20.00th=[24773], 00:27:08.214 | 30.00th=[26346], 40.00th=[26870], 50.00th=[27657], 60.00th=[28181], 00:27:08.214 | 70.00th=[28443], 80.00th=[28967], 90.00th=[30802], 95.00th=[31851], 00:27:08.214 | 99.00th=[33162], 99.50th=[34866], 99.90th=[34866], 
99.95th=[34866], 00:27:08.214 | 99.99th=[34866] 00:27:08.214 bw ( KiB/s): min= 9058, max=10224, per=18.96%, avg=9641.00, stdev=824.49, samples=2 00:27:08.214 iops : min= 2264, max= 2556, avg=2410.00, stdev=206.48, samples=2 00:27:08.214 lat (msec) : 10=0.67%, 20=1.44%, 50=97.89% 00:27:08.214 cpu : usr=2.79%, sys=10.56%, ctx=592, majf=0, minf=10 00:27:08.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:27:08.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:08.214 issued rwts: total=2048,2550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:08.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:08.214 00:27:08.214 Run status group 0 (all jobs): 00:27:08.214 READ: bw=44.7MiB/s (46.8MB/s), 8151KiB/s-18.9MiB/s (8347kB/s-19.8MB/s), io=45.0MiB (47.2MB), run=1003-1007msec 00:27:08.214 WRITE: bw=49.6MiB/s (52.1MB/s), 9.91MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=50.0MiB (52.4MB), run=1003-1007msec 00:27:08.214 00:27:08.214 Disk stats (read/write): 00:27:08.214 nvme0n1: ios=2098/2365, merge=0/0, ticks=16352/17279, in_queue=33631, util=88.87% 00:27:08.214 nvme0n2: ios=4145/4525, merge=0/0, ticks=15930/14594, in_queue=30524, util=90.20% 00:27:08.214 nvme0n3: ios=1915/2048, merge=0/0, ticks=12410/11831, in_queue=24241, util=88.72% 00:27:08.214 nvme0n4: ios=1894/2048, merge=0/0, ticks=12522/11967, in_queue=24489, util=88.45% 00:27:08.214 15:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:27:08.214 [global] 00:27:08.214 thread=1 00:27:08.214 invalidate=1 00:27:08.214 rw=randwrite 00:27:08.214 time_based=1 00:27:08.214 runtime=1 00:27:08.214 ioengine=libaio 00:27:08.214 direct=1 00:27:08.214 bs=4096 00:27:08.214 iodepth=128 00:27:08.214 norandommap=0 00:27:08.214 numjobs=1 00:27:08.214 00:27:08.214 verify_dump=1 00:27:08.214 verify_backlog=512 00:27:08.214 verify_state_save=0 00:27:08.214 do_verify=1 00:27:08.214 verify=crc32c-intel 00:27:08.214 [job0] 00:27:08.214 filename=/dev/nvme0n1 00:27:08.214 [job1] 00:27:08.214 filename=/dev/nvme0n2 00:27:08.214 [job2] 00:27:08.214 filename=/dev/nvme0n3 00:27:08.214 [job3] 00:27:08.214 filename=/dev/nvme0n4 00:27:08.214 Could not set queue depth (nvme0n1) 00:27:08.214 Could not set queue depth (nvme0n2) 00:27:08.214 Could not set queue depth (nvme0n3) 00:27:08.214 Could not set queue depth (nvme0n4) 00:27:08.214 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:08.214 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:08.214 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:08.214 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:08.214 fio-3.35 00:27:08.214 Starting 4 threads 00:27:09.593 00:27:09.593 job0: (groupid=0, jobs=1): err= 0: pid=107403: Wed Nov 20 15:34:55 2024 00:27:09.593 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:27:09.593 slat (usec): min=9, max=6615, avg=149.96, stdev=648.35 00:27:09.593 clat (usec): min=12929, max=31533, avg=19742.71, stdev=3672.29 00:27:09.593 lat (usec): min=14079, max=31570, avg=19892.67, stdev=3679.37 00:27:09.593 clat percentiles (usec): 00:27:09.593 | 
1.00th=[14484], 5.00th=[15401], 10.00th=[16188], 20.00th=[16909], 00:27:09.593 | 30.00th=[17433], 40.00th=[17957], 50.00th=[18482], 60.00th=[18744], 00:27:09.593 | 70.00th=[20317], 80.00th=[23725], 90.00th=[25560], 95.00th=[26870], 00:27:09.593 | 99.00th=[30540], 99.50th=[30540], 99.90th=[31589], 99.95th=[31589], 00:27:09.593 | 99.99th=[31589] 00:27:09.593 write: IOPS=3416, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1004msec); 0 zone resets 00:27:09.593 slat (usec): min=10, max=8348, avg=145.83, stdev=613.87 00:27:09.593 clat (usec): min=3493, max=29878, avg=18929.13, stdev=3927.47 00:27:09.593 lat (usec): min=3526, max=29914, avg=19074.96, stdev=3930.32 00:27:09.593 clat percentiles (usec): 00:27:09.593 | 1.00th=[ 7373], 5.00th=[13960], 10.00th=[14484], 20.00th=[16057], 00:27:09.593 | 30.00th=[16909], 40.00th=[17695], 50.00th=[18220], 60.00th=[19006], 00:27:09.593 | 70.00th=[20317], 80.00th=[22676], 90.00th=[24773], 95.00th=[25560], 00:27:09.593 | 99.00th=[28181], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:27:09.593 | 99.99th=[29754] 00:27:09.593 bw ( KiB/s): min=12312, max=14107, per=27.41%, avg=13209.50, stdev=1269.26, samples=2 00:27:09.593 iops : min= 3078, max= 3526, avg=3302.00, stdev=316.78, samples=2 00:27:09.593 lat (msec) : 4=0.11%, 10=0.74%, 20=67.61%, 50=31.54% 00:27:09.593 cpu : usr=3.49%, sys=13.56%, ctx=402, majf=0, minf=6 00:27:09.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:09.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:09.593 issued rwts: total=3072,3430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:09.594 job1: (groupid=0, jobs=1): err= 0: pid=107404: Wed Nov 20 15:34:55 2024 00:27:09.594 read: IOPS=2246, BW=8985KiB/s (9201kB/s)(9048KiB/1007msec) 00:27:09.594 slat (usec): min=7, max=10222, avg=210.83, stdev=954.39 00:27:09.594 clat (usec): min=1210, max=46795, avg=26711.70, stdev=10708.76 00:27:09.594 lat (usec): min=7287, max=48094, avg=26922.53, stdev=10781.35 00:27:09.594 clat percentiles (usec): 00:27:09.594 | 1.00th=[10028], 5.00th=[13566], 10.00th=[14877], 20.00th=[15926], 00:27:09.594 | 30.00th=[16712], 40.00th=[17695], 50.00th=[28181], 60.00th=[33424], 00:27:09.594 | 70.00th=[35914], 80.00th=[37487], 90.00th=[39584], 95.00th=[42206], 00:27:09.594 | 99.00th=[45876], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:27:09.594 | 99.99th=[46924] 00:27:09.594 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:27:09.594 slat (usec): min=22, max=8772, avg=194.76, stdev=784.31 00:27:09.594 clat (usec): min=10236, max=45291, avg=26127.73, stdev=10541.83 00:27:09.594 lat (usec): min=10271, max=45323, avg=26322.49, stdev=10598.90 00:27:09.594 clat percentiles (usec): 00:27:09.594 | 1.00th=[11076], 5.00th=[12125], 10.00th=[13960], 20.00th=[15270], 00:27:09.594 | 30.00th=[15795], 40.00th=[17957], 50.00th=[25560], 60.00th=[28967], 00:27:09.594 | 70.00th=[35914], 80.00th=[37487], 90.00th=[40109], 95.00th=[41681], 00:27:09.594 | 99.00th=[43779], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:27:09.594 | 99.99th=[45351] 00:27:09.594 bw ( KiB/s): min= 8175, max=12288, per=21.23%, avg=10231.50, stdev=2908.33, samples=2 00:27:09.594 iops : min= 2043, max= 3072, avg=2557.50, stdev=727.61, samples=2 00:27:09.594 lat (msec) : 2=0.02%, 10=0.41%, 20=42.06%, 50=57.51% 00:27:09.594 cpu : usr=3.18%, sys=10.44%, ctx=482, 
majf=0, minf=1 00:27:09.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:09.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:09.594 issued rwts: total=2262,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:09.594 job2: (groupid=0, jobs=1): err= 0: pid=107405: Wed Nov 20 15:34:55 2024 00:27:09.594 read: IOPS=3458, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1005msec) 00:27:09.594 slat (usec): min=18, max=9258, avg=142.08, stdev=747.54 00:27:09.594 clat (usec): min=614, max=36071, avg=18003.31, stdev=4850.53 00:27:09.594 lat (usec): min=5390, max=36120, avg=18145.39, stdev=4908.92 00:27:09.594 clat percentiles (usec): 00:27:09.594 | 1.00th=[ 6325], 5.00th=[13566], 10.00th=[14615], 20.00th=[15401], 00:27:09.594 | 30.00th=[15664], 40.00th=[16450], 50.00th=[16712], 60.00th=[17171], 00:27:09.594 | 70.00th=[17433], 80.00th=[18220], 90.00th=[26084], 95.00th=[28967], 00:27:09.594 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35914], 00:27:09.594 | 99.99th=[35914] 00:27:09.594 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:27:09.594 slat (usec): min=15, max=9454, avg=130.01, stdev=628.20 00:27:09.594 clat (usec): min=10944, max=35273, avg=17781.50, stdev=4179.70 00:27:09.594 lat (usec): min=10977, max=35333, avg=17911.52, stdev=4203.32 00:27:09.594 clat percentiles (usec): 00:27:09.594 | 1.00th=[12256], 5.00th=[12911], 10.00th=[13698], 20.00th=[15139], 00:27:09.594 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16712], 60.00th=[17171], 00:27:09.594 | 70.00th=[17695], 80.00th=[18482], 90.00th=[26346], 95.00th=[27395], 00:27:09.594 | 99.00th=[29754], 99.50th=[29754], 99.90th=[33817], 99.95th=[34341], 00:27:09.594 | 99.99th=[35390] 00:27:09.594 bw ( KiB/s): min=12312, max=16351, per=29.73%, avg=14331.50, stdev=2856.00, samples=2 00:27:09.594 iops : min= 3078, max= 4087, avg=3582.50, stdev=713.47, samples=2 00:27:09.594 lat (usec) : 750=0.01% 00:27:09.594 lat (msec) : 10=0.59%, 20=82.31%, 50=17.08% 00:27:09.594 cpu : usr=4.08%, sys=14.64%, ctx=354, majf=0, minf=9 00:27:09.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:09.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:09.594 issued rwts: total=3476,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:09.594 job3: (groupid=0, jobs=1): err= 0: pid=107406: Wed Nov 20 15:34:55 2024 00:27:09.594 read: IOPS=2276, BW=9108KiB/s (9326kB/s)(9144KiB/1004msec) 00:27:09.594 slat (usec): min=7, max=10156, avg=210.60, stdev=943.98 00:27:09.594 clat (usec): min=1550, max=45617, avg=26536.40, stdev=9839.05 00:27:09.594 lat (usec): min=7375, max=45640, avg=26747.01, stdev=9886.60 00:27:09.594 clat percentiles (usec): 00:27:09.594 | 1.00th=[ 9241], 5.00th=[15533], 10.00th=[16319], 20.00th=[17171], 00:27:09.594 | 30.00th=[17957], 40.00th=[18220], 50.00th=[23987], 60.00th=[32637], 00:27:09.594 | 70.00th=[35390], 80.00th=[37487], 90.00th=[39060], 95.00th=[40633], 00:27:09.594 | 99.00th=[43779], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:27:09.594 | 99.99th=[45876] 00:27:09.594 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:27:09.594 slat (usec): min=9, max=8819, 
avg=191.60, stdev=802.62 00:27:09.594 clat (usec): min=13391, max=45070, avg=25437.10, stdev=10250.39 00:27:09.594 lat (usec): min=13450, max=45760, avg=25628.70, stdev=10305.60 00:27:09.594 clat percentiles (usec): 00:27:09.594 | 1.00th=[13698], 5.00th=[14615], 10.00th=[15401], 20.00th=[16057], 00:27:09.594 | 30.00th=[16909], 40.00th=[18220], 50.00th=[19530], 60.00th=[27132], 00:27:09.594 | 70.00th=[35914], 80.00th=[37487], 90.00th=[39060], 95.00th=[41681], 00:27:09.594 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:27:09.594 | 99.99th=[44827] 00:27:09.594 bw ( KiB/s): min= 8192, max=12288, per=21.25%, avg=10240.00, stdev=2896.31, samples=2 00:27:09.594 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:27:09.594 lat (msec) : 2=0.02%, 10=0.58%, 20=51.36%, 50=48.04% 00:27:09.594 cpu : usr=2.99%, sys=10.37%, ctx=519, majf=0, minf=1 00:27:09.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:09.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:09.594 issued rwts: total=2286,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:09.594 00:27:09.594 Run status group 0 (all jobs): 00:27:09.594 READ: bw=43.0MiB/s (45.1MB/s), 8985KiB/s-13.5MiB/s (9201kB/s-14.2MB/s), io=43.3MiB (45.4MB), run=1004-1007msec 00:27:09.594 WRITE: bw=47.1MiB/s (49.4MB/s), 9.93MiB/s-13.9MiB/s (10.4MB/s-14.6MB/s), io=47.4MiB (49.7MB), run=1004-1007msec 00:27:09.594 00:27:09.594 Disk stats (read/write): 00:27:09.594 nvme0n1: ios=2610/2959, merge=0/0, ticks=12379/12822, in_queue=25201, util=88.78% 00:27:09.594 nvme0n2: ios=2097/2283, merge=0/0, ticks=14118/14879, in_queue=28997, util=90.20% 00:27:09.594 nvme0n3: ios=2972/3072, merge=0/0, ticks=16634/15965, in_queue=32599, util=91.08% 00:27:09.594 nvme0n4: ios=2048/2315, merge=0/0, ticks=12440/12433, in_queue=24873, util=89.07% 00:27:09.594 15:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:27:09.594 15:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=107419 00:27:09.594 15:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:27:09.594 15:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:27:09.594 [global] 00:27:09.594 thread=1 00:27:09.594 invalidate=1 00:27:09.594 rw=read 00:27:09.594 time_based=1 00:27:09.594 runtime=10 00:27:09.594 ioengine=libaio 00:27:09.594 direct=1 00:27:09.594 bs=4096 00:27:09.594 iodepth=1 00:27:09.594 norandommap=1 00:27:09.594 numjobs=1 00:27:09.594 00:27:09.594 [job0] 00:27:09.594 filename=/dev/nvme0n1 00:27:09.594 [job1] 00:27:09.594 filename=/dev/nvme0n2 00:27:09.594 [job2] 00:27:09.594 filename=/dev/nvme0n3 00:27:09.594 [job3] 00:27:09.594 filename=/dev/nvme0n4 00:27:09.594 Could not set queue depth (nvme0n1) 00:27:09.594 Could not set queue depth (nvme0n2) 00:27:09.594 Could not set queue depth (nvme0n3) 00:27:09.594 Could not set queue depth (nvme0n4) 00:27:09.854 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:09.854 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:09.854 job2: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:09.854 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:09.854 fio-3.35 00:27:09.854 Starting 4 threads 00:27:13.147 15:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:27:13.147 fio: pid=107466, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:27:13.147 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=28110848, buflen=4096 00:27:13.147 15:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:27:13.147 fio: pid=107465, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:27:13.147 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45142016, buflen=4096 00:27:13.147 15:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:13.147 15:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:27:13.147 fio: pid=107463, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:27:13.147 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=35459072, buflen=4096 00:27:13.147 15:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:13.147 15:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:27:13.407 fio: pid=107464, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:27:13.407 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=65613824, buflen=4096 00:27:13.407 00:27:13.407 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107463: Wed Nov 20 15:34:59 2024 00:27:13.407 read: IOPS=2650, BW=10.4MiB/s (10.9MB/s)(33.8MiB/3266msec) 00:27:13.407 slat (usec): min=10, max=12414, avg=32.04, stdev=212.08 00:27:13.407 clat (usec): min=121, max=5870, avg=342.85, stdev=100.85 00:27:13.407 lat (usec): min=137, max=12815, avg=374.89, stdev=236.44 00:27:13.407 clat percentiles (usec): 00:27:13.407 | 1.00th=[ 182], 5.00th=[ 206], 10.00th=[ 233], 20.00th=[ 269], 00:27:13.407 | 30.00th=[ 293], 40.00th=[ 326], 50.00th=[ 351], 60.00th=[ 375], 00:27:13.407 | 70.00th=[ 392], 80.00th=[ 412], 90.00th=[ 437], 95.00th=[ 453], 00:27:13.407 | 99.00th=[ 482], 99.50th=[ 490], 99.90th=[ 603], 99.95th=[ 1123], 00:27:13.407 | 99.99th=[ 5866] 00:27:13.407 bw ( KiB/s): min= 9120, max=12552, per=21.52%, avg=10469.33, stdev=1505.29, samples=6 00:27:13.407 iops : min= 2280, max= 3138, avg=2617.33, stdev=376.32, samples=6 00:27:13.407 lat (usec) : 250=13.64%, 500=86.08%, 750=0.18%, 1000=0.02% 00:27:13.407 lat (msec) : 2=0.05%, 10=0.01% 00:27:13.407 cpu : usr=1.26%, sys=6.16%, ctx=8668, majf=0, minf=1 00:27:13.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:13.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.407 complete : 0=0.1%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.407 issued rwts: total=8658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:13.407 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107464: Wed Nov 20 15:34:59 2024 00:27:13.407 read: IOPS=4578, BW=17.9MiB/s (18.8MB/s)(62.6MiB/3499msec) 00:27:13.407 slat (usec): min=7, max=12850, avg=13.08, stdev=197.86 00:27:13.407 clat (usec): min=114, max=2938, avg=204.63, stdev=61.56 00:27:13.407 lat (usec): min=123, max=13118, avg=217.71, stdev=207.92 00:27:13.407 clat percentiles (usec): 00:27:13.407 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 147], 20.00th=[ 159], 00:27:13.407 | 30.00th=[ 172], 40.00th=[ 190], 50.00th=[ 206], 60.00th=[ 219], 00:27:13.407 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 273], 00:27:13.407 | 99.00th=[ 322], 99.50th=[ 371], 99.90th=[ 676], 99.95th=[ 1156], 00:27:13.407 | 99.99th=[ 2180] 00:27:13.407 bw ( KiB/s): min=16555, max=18728, per=37.09%, avg=18047.17, stdev=825.10, samples=6 00:27:13.407 iops : min= 4138, max= 4682, avg=4511.67, stdev=206.55, samples=6 00:27:13.407 lat (usec) : 250=86.44%, 500=13.38%, 750=0.08%, 1000=0.04% 00:27:13.407 lat (msec) : 2=0.04%, 4=0.02% 00:27:13.407 cpu : usr=0.66%, sys=3.60%, ctx=16027, majf=0, minf=2 00:27:13.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:13.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.407 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.407 issued rwts: total=16020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:13.407 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107465: Wed Nov 20 15:34:59 2024 00:27:13.407 read: IOPS=3594, BW=14.0MiB/s (14.7MB/s)(43.1MiB/3066msec) 00:27:13.407 slat (usec): min=8, max=12808, avg=18.72, stdev=172.54 00:27:13.407 clat (usec): min=112, max=6140, avg=257.61, stdev=117.55 00:27:13.407 lat (usec): min=142, max=13146, avg=276.33, stdev=212.73 00:27:13.407 clat percentiles (usec): 00:27:13.407 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 186], 00:27:13.407 | 30.00th=[ 196], 40.00th=[ 212], 50.00th=[ 235], 60.00th=[ 258], 00:27:13.407 | 70.00th=[ 285], 80.00th=[ 330], 90.00th=[ 375], 95.00th=[ 400], 00:27:13.407 | 99.00th=[ 453], 99.50th=[ 502], 99.90th=[ 1434], 99.95th=[ 1909], 00:27:13.407 | 99.99th=[ 4948] 00:27:13.407 bw ( KiB/s): min=10720, max=17912, per=31.60%, avg=15377.60, stdev=3384.13, samples=5 00:27:13.407 iops : min= 2680, max= 4478, avg=3844.40, stdev=846.03, samples=5 00:27:13.407 lat (usec) : 250=56.80%, 500=42.66%, 750=0.37%, 1000=0.01% 00:27:13.407 lat (msec) : 2=0.11%, 4=0.02%, 10=0.02% 00:27:13.407 cpu : usr=1.01%, sys=4.99%, ctx=11042, majf=0, minf=2 00:27:13.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:13.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.407 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.407 issued rwts: total=11022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:13.407 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107466: Wed Nov 20 15:34:59 2024 00:27:13.407 read: IOPS=2422, BW=9690KiB/s 
(9923kB/s)(26.8MiB/2833msec) 00:27:13.407 slat (usec): min=17, max=361, avg=32.94, stdev= 6.45 00:27:13.407 clat (usec): min=175, max=4896, avg=376.50, stdev=82.03 00:27:13.407 lat (usec): min=207, max=4930, avg=409.45, stdev=82.50 00:27:13.407 clat percentiles (usec): 00:27:13.407 | 1.00th=[ 215], 5.00th=[ 255], 10.00th=[ 289], 20.00th=[ 330], 00:27:13.407 | 30.00th=[ 355], 40.00th=[ 371], 50.00th=[ 388], 60.00th=[ 400], 00:27:13.407 | 70.00th=[ 412], 80.00th=[ 424], 90.00th=[ 445], 95.00th=[ 457], 00:27:13.407 | 99.00th=[ 486], 99.50th=[ 494], 99.90th=[ 627], 99.95th=[ 791], 00:27:13.407 | 99.99th=[ 4883] 00:27:13.407 bw ( KiB/s): min= 9104, max=10808, per=19.78%, avg=9624.00, stdev=739.10, samples=5 00:27:13.407 iops : min= 2276, max= 2702, avg=2406.00, stdev=184.78, samples=5 00:27:13.407 lat (usec) : 250=4.44%, 500=95.24%, 750=0.25%, 1000=0.03% 00:27:13.407 lat (msec) : 2=0.01%, 10=0.01% 00:27:13.407 cpu : usr=1.38%, sys=6.78%, ctx=6864, majf=0, minf=2 00:27:13.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:13.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.407 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.407 issued rwts: total=6864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:13.407 00:27:13.407 Run status group 0 (all jobs): 00:27:13.407 READ: bw=47.5MiB/s (49.8MB/s), 9690KiB/s-17.9MiB/s (9923kB/s-18.8MB/s), io=166MiB (174MB), run=2833-3499msec 00:27:13.407 00:27:13.407 Disk stats (read/write): 00:27:13.407 nvme0n1: ios=8192/0, merge=0/0, ticks=2895/0, in_queue=2895, util=95.13% 00:27:13.407 nvme0n2: ios=15192/0, merge=0/0, ticks=3180/0, in_queue=3180, util=95.10% 00:27:13.407 nvme0n3: ios=10576/0, merge=0/0, ticks=2719/0, in_queue=2719, util=96.54% 00:27:13.407 nvme0n4: ios=6329/0, merge=0/0, ticks=2435/0, in_queue=2435, util=96.47% 00:27:13.407 15:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:13.407 15:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:27:13.667 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:13.667 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:27:13.926 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:13.926 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:27:14.187 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:14.187 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:27:14.448 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:14.448 
15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:27:14.707 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:27:14.707 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 107419 00:27:14.707 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:27:14.708 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:14.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:14.708 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:14.708 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:27:14.708 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:14.708 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:14.708 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:14.708 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:14.708 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:27:14.708 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:27:14.708 nvmf hotplug test: fio failed as expected 00:27:14.708 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:27:14.708 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:14.967 15:35:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:14.967 rmmod nvme_tcp 00:27:14.967 rmmod nvme_fabrics 00:27:14.967 rmmod nvme_keyring 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 106938 ']' 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 106938 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 106938 ']' 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 106938 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:14.967 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106938 00:27:15.227 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:15.227 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:15.227 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106938' 00:27:15.227 killing process with pid 106938 00:27:15.227 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 106938 00:27:15.227 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 106938 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:15.487 15:35:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:15.487 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:15.747 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:15.747 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.747 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.747 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.747 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:27:15.747 ************************************ 00:27:15.747 END TEST nvmf_fio_target 00:27:15.747 ************************************ 00:27:15.747 00:27:15.747 real 0m19.620s 00:27:15.747 user 0m58.373s 00:27:15.747 sys 0m10.854s 00:27:15.747 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.747 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:27:15.747 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:27:15.747 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:15.747 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.747 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:15.747 ************************************ 00:27:15.747 START TEST nvmf_bdevio 00:27:15.747 ************************************ 00:27:15.747 15:35:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:27:15.747 * Looking for test storage... 00:27:16.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:16.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.009 --rc genhtml_branch_coverage=1 00:27:16.009 --rc genhtml_function_coverage=1 00:27:16.009 --rc genhtml_legend=1 00:27:16.009 --rc geninfo_all_blocks=1 00:27:16.009 --rc geninfo_unexecuted_blocks=1 00:27:16.009 00:27:16.009 ' 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:16.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.009 --rc genhtml_branch_coverage=1 00:27:16.009 --rc genhtml_function_coverage=1 00:27:16.009 --rc genhtml_legend=1 00:27:16.009 --rc geninfo_all_blocks=1 00:27:16.009 --rc geninfo_unexecuted_blocks=1 00:27:16.009 00:27:16.009 ' 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:16.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.009 --rc genhtml_branch_coverage=1 00:27:16.009 --rc genhtml_function_coverage=1 00:27:16.009 --rc genhtml_legend=1 00:27:16.009 --rc geninfo_all_blocks=1 00:27:16.009 --rc geninfo_unexecuted_blocks=1 00:27:16.009 00:27:16.009 ' 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:16.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.009 --rc genhtml_branch_coverage=1 00:27:16.009 --rc genhtml_function_coverage=1 00:27:16.009 --rc genhtml_legend=1 00:27:16.009 --rc geninfo_all_blocks=1 00:27:16.009 --rc geninfo_unexecuted_blocks=1 00:27:16.009 00:27:16.009 ' 00:27:16.009 15:35:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.009 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.010 15:35:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:16.010 Cannot find device "nvmf_init_br" 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:16.010 Cannot find device "nvmf_init_br2" 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:16.010 Cannot find device "nvmf_tgt_br" 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:16.010 Cannot find device "nvmf_tgt_br2" 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:16.010 Cannot find device "nvmf_init_br" 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:27:16.010 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:16.271 Cannot find device "nvmf_init_br2" 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:16.271 Cannot find device "nvmf_tgt_br" 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:16.271 Cannot find device "nvmf_tgt_br2" 00:27:16.271 15:35:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:16.271 Cannot find device "nvmf_br" 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:16.271 Cannot find device "nvmf_init_if" 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:16.271 Cannot find device "nvmf_init_if2" 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:16.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:16.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:16.271 15:35:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:16.271 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:16.532 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:16.532 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:27:16.532 00:27:16.532 --- 10.0.0.3 ping statistics --- 00:27:16.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.532 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:16.532 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:16.532 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:27:16.532 00:27:16.532 --- 10.0.0.4 ping statistics --- 00:27:16.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.532 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:16.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:27:16.532 00:27:16.532 --- 10.0.0.1 ping statistics --- 00:27:16.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.532 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:16.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:27:16.532 00:27:16.532 --- 10.0.0.2 ping statistics --- 00:27:16.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.532 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=107841 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 107841 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 107841 ']' 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.532 15:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:16.532 [2024-11-20 15:35:02.956990] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:16.532 [2024-11-20 15:35:02.958040] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:27:16.532 [2024-11-20 15:35:02.958099] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.792 [2024-11-20 15:35:03.110514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.792 [2024-11-20 15:35:03.171011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.792 [2024-11-20 15:35:03.171058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.792 [2024-11-20 15:35:03.171068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.792 [2024-11-20 15:35:03.171076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.792 [2024-11-20 15:35:03.171083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.792 [2024-11-20 15:35:03.172471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:16.792 [2024-11-20 15:35:03.172917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:16.792 [2024-11-20 15:35:03.173306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:16.792 [2024-11-20 15:35:03.173345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.051 [2024-11-20 15:35:03.299754] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:17.051 [2024-11-20 15:35:03.299998] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:17.051 [2024-11-20 15:35:03.300355] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
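The run of thread.c notices around this point is the target switching each of its spdk_threads into interrupt mode right after startup. For orientation, the launch that nvmfappstart performs boils down to roughly the shell sequence below; the binary path and the 0x78 core mask are taken from this run, while the rpc_get_methods readiness poll stands in for the waitforlisten helper, so treat it as a sketch rather than the verbatim common.sh logic:

# Start the SPDK NVMe-oF target inside the test namespace on cores 3-6 (0x78);
# --interrupt-mode makes the reactors sleep on event fds instead of busy-polling.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!
# Block until the app answers on its default RPC socket (/var/tmp/spdk.sock).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 30 rpc_get_methods > /dev/null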
00:27:17.051 [2024-11-20 15:35:03.300782] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:17.051 [2024-11-20 15:35:03.301237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:17.622 [2024-11-20 15:35:03.906652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:17.622 Malloc0 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.622 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:17.622 [2024-11-20 15:35:04.022972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.622 { 00:27:17.622 "params": { 00:27:17.622 "name": "Nvme$subsystem", 00:27:17.622 "trtype": "$TEST_TRANSPORT", 00:27:17.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.622 "adrfam": "ipv4", 00:27:17.622 "trsvcid": "$NVMF_PORT", 00:27:17.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.622 "hdgst": ${hdgst:-false}, 00:27:17.622 "ddgst": ${ddgst:-false} 00:27:17.622 }, 00:27:17.622 "method": "bdev_nvme_attach_controller" 00:27:17.622 } 00:27:17.622 EOF 00:27:17.622 )") 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:27:17.622 15:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:17.622 "params": { 00:27:17.622 "name": "Nvme1", 00:27:17.622 "trtype": "tcp", 00:27:17.622 "traddr": "10.0.0.3", 00:27:17.622 "adrfam": "ipv4", 00:27:17.622 "trsvcid": "4420", 00:27:17.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:17.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:17.622 "hdgst": false, 00:27:17.622 "ddgst": false 00:27:17.622 }, 00:27:17.622 "method": "bdev_nvme_attach_controller" 00:27:17.622 }' 00:27:17.622 [2024-11-20 15:35:04.087763] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
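The heredoc printed above carries only the per-controller params; gen_nvmf_target_json wraps them in SPDK's standard JSON config layout before bdevio reads them from /dev/fd/62. With the values substituted in this run, the document handed to bdevio looks roughly like this (a sketch of the shape, not a capture of the exact bytes):

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}

Passing the config this way lets bdevio attach to the target as an NVMe-oF initiator at startup, without a separate RPC round-trip.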
00:27:17.622 [2024-11-20 15:35:04.087835] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107895 ] 00:27:17.882 [2024-11-20 15:35:04.233968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:17.882 [2024-11-20 15:35:04.296982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.882 [2024-11-20 15:35:04.297158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.882 [2024-11-20 15:35:04.297158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.141 I/O targets: 00:27:18.141 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:27:18.141 00:27:18.141 00:27:18.141 CUnit - A unit testing framework for C - Version 2.1-3 00:27:18.141 http://cunit.sourceforge.net/ 00:27:18.141 00:27:18.141 00:27:18.141 Suite: bdevio tests on: Nvme1n1 00:27:18.141 Test: blockdev write read block ...passed 00:27:18.141 Test: blockdev write zeroes read block ...passed 00:27:18.141 Test: blockdev write zeroes read no split ...passed 00:27:18.141 Test: blockdev write zeroes read split ...passed 00:27:18.141 Test: blockdev write zeroes read split partial ...passed 00:27:18.141 Test: blockdev reset ...[2024-11-20 15:35:04.620546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:27:18.141 [2024-11-20 15:35:04.620638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4ef50 (9): Bad file descriptor 00:27:18.400 [2024-11-20 15:35:04.624049] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:27:18.400 passed 00:27:18.400 Test: blockdev write read 8 blocks ...passed 00:27:18.400 Test: blockdev write read size > 128k ...passed 00:27:18.400 Test: blockdev write read invalid size ...passed 00:27:18.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:18.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:18.400 Test: blockdev write read max offset ...passed 00:27:18.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:18.400 Test: blockdev writev readv 8 blocks ...passed 00:27:18.400 Test: blockdev writev readv 30 x 1block ...passed 00:27:18.400 Test: blockdev writev readv block ...passed 00:27:18.400 Test: blockdev writev readv size > 128k ...passed 00:27:18.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:18.400 Test: blockdev comparev and writev ...[2024-11-20 15:35:04.798801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:18.400 [2024-11-20 15:35:04.798840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.400 [2024-11-20 15:35:04.798856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:18.400 [2024-11-20 15:35:04.798866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:18.400 [2024-11-20 15:35:04.799364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:18.400 [2024-11-20 15:35:04.799387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:18.400 [2024-11-20 15:35:04.799402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:18.400 [2024-11-20 15:35:04.799413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:18.400 [2024-11-20 15:35:04.799902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:18.400 [2024-11-20 15:35:04.799923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:18.400 [2024-11-20 15:35:04.799937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:18.400 [2024-11-20 15:35:04.799947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:18.400 [2024-11-20 15:35:04.800439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:18.400 [2024-11-20 15:35:04.800460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:18.400 [2024-11-20 15:35:04.800473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:18.400 [2024-11-20 15:35:04.800483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:18.400 passed 00:27:18.660 Test: blockdev nvme passthru rw ...passed 00:27:18.660 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:35:04.885110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:18.660 [2024-11-20 15:35:04.885146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:18.660 [2024-11-20 15:35:04.885283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:18.660 [2024-11-20 15:35:04.885296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:18.660 [2024-11-20 15:35:04.885432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:18.660 [2024-11-20 15:35:04.885442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:18.660 [2024-11-20 15:35:04.885551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:18.660 [2024-11-20 15:35:04.885562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:18.660 passed 00:27:18.660 Test: blockdev nvme admin passthru ...passed 00:27:18.660 Test: blockdev copy ...passed 00:27:18.660 00:27:18.660 Run Summary: Type Total Ran Passed Failed Inactive 00:27:18.660 suites 1 1 n/a 0 0 00:27:18.660 tests 23 23 23 0 0 00:27:18.660 asserts 152 152 152 0 n/a 00:27:18.660 00:27:18.660 Elapsed time = 0.885 seconds 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:18.920 rmmod nvme_tcp 00:27:18.920 rmmod nvme_fabrics 00:27:18.920 rmmod nvme_keyring 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
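The interleaved rmmod output above comes from nvmfcleanup unloading the host-side NVMe modules. Errexit is switched off around the attempts so that a module that was never loaded cannot abort the teardown; condensed to its core, the pattern is roughly the following (the real helper also retries inside a for i in {1..20} loop, which this sketch drops):

set +e                        # a missing module must not kill the teardown
modprobe -v -r nvme-tcp       # -v echoes the underlying rmmod calls seen above
modprobe -v -r nvme-fabrics   # nvme_keyring goes away as an unused dependency
set -e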
00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 107841 ']' 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 107841 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 107841 ']' 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 107841 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107841 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:27:18.920 killing process with pid 107841 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107841' 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 107841 00:27:18.920 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 107841 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:19.489 15:35:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:19.489 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.749 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:27:19.749 00:27:19.749 real 0m3.878s 00:27:19.749 user 0m7.843s 00:27:19.749 sys 0m1.734s 00:27:19.749 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.749 ************************************ 00:27:19.749 15:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:19.749 END TEST nvmf_bdevio 00:27:19.749 ************************************ 00:27:19.749 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:19.749 ************************************ 00:27:19.749 END TEST nvmf_target_core_interrupt_mode 00:27:19.749 ************************************ 00:27:19.749 00:27:19.749 real 3m37.538s 00:27:19.749 user 9m0.888s 00:27:19.749 sys 1m42.165s 00:27:19.749 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.749 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:19.749 15:35:06 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:27:19.749 15:35:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:19.749 15:35:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.749 15:35:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:19.749 ************************************ 00:27:19.749 START TEST nvmf_interrupt 00:27:19.749 ************************************ 00:27:19.749 15:35:06 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:27:20.010 * Looking for test storage... 00:27:20.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:20.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.010 --rc genhtml_branch_coverage=1 00:27:20.010 --rc genhtml_function_coverage=1 00:27:20.010 --rc genhtml_legend=1 00:27:20.010 --rc geninfo_all_blocks=1 00:27:20.010 --rc geninfo_unexecuted_blocks=1 00:27:20.010 00:27:20.010 ' 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:20.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.010 --rc genhtml_branch_coverage=1 00:27:20.010 --rc genhtml_function_coverage=1 00:27:20.010 --rc genhtml_legend=1 00:27:20.010 --rc geninfo_all_blocks=1 00:27:20.010 --rc geninfo_unexecuted_blocks=1 00:27:20.010 00:27:20.010 ' 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:20.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.010 --rc genhtml_branch_coverage=1 00:27:20.010 --rc genhtml_function_coverage=1 00:27:20.010 --rc genhtml_legend=1 00:27:20.010 --rc geninfo_all_blocks=1 00:27:20.010 --rc geninfo_unexecuted_blocks=1 00:27:20.010 00:27:20.010 ' 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:20.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.010 --rc genhtml_branch_coverage=1 00:27:20.010 --rc genhtml_function_coverage=1 00:27:20.010 --rc genhtml_legend=1 00:27:20.010 --rc geninfo_all_blocks=1 00:27:20.010 --rc geninfo_unexecuted_blocks=1 00:27:20.010 00:27:20.010 ' 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
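The cmp_versions walk traced above (scripts/common.sh, splitting each version string on IFS=.-: and comparing field by field) is how the suite decides whether the installed lcov predates 2.x before choosing coverage flags. A minimal sketch of the same idea, assuming purely numeric components; this is a paraphrase of the trace, not the exact scripts/common.sh source:

    # lt A B: succeed when version A sorts strictly before version B
    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
      local -a ver1 ver2
      local op=$2 v a b
      IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-', ':'
      IFS=.-: read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}              # missing fields compare as 0
        (( a > b )) && { [[ $op == *'>'* ]]; return $?; }
        (( a < b )) && { [[ $op == *'<'* ]]; return $?; }
      done
      [[ $op == *'='* ]]                             # all fields equal: only ==, <=, >= hold
    }
    # e.g. lt 1.15 2   # the comparison performed in the trace above

Here lt 1.15 2 succeeds (1 < 2 on the first field), which is why the run exports the pre-2.0 lcov_branch_coverage/lcov_function_coverage options seen in the LCOV_OPTS block that follows.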
00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:27:20.010 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:27:20.011 15:35:06 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
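build_nvmf_app_args, traced just above, accumulates the target's flags in a bash array so the command can later be expanded verbatim with "${NVMF_APP[@]}". A condensed sketch of that pattern as this run exercises it (array names taken from the xtrace; the namespace re-rooting and the -m 0x3 launch appear further down in the log):

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)  # base binary, per the launch line below
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id (0 here) and tracepoint mask
    NVMF_APP+=(--interrupt-mode)                  # appended because --interrupt-mode was requested
    # nvmftestinit later prefixes the namespace wrapper (NVMF_TARGET_NS_CMD):
    NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0x3 &                     # yields the nvmfpid launch seen below
    nvmfpid=$!

waitforlisten then polls until the new process answers on /var/tmp/spdk.sock, which is the "Waiting for process to start up and listen on UNIX domain socket" message later in this section.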
00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:20.011 Cannot find device "nvmf_init_br" 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:20.011 Cannot find device "nvmf_init_br2" 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:20.011 Cannot find device "nvmf_tgt_br" 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:20.011 Cannot find device "nvmf_tgt_br2" 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:27:20.011 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:20.271 Cannot find device "nvmf_init_br" 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:20.271 Cannot find device "nvmf_init_br2" 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:20.271 Cannot find device "nvmf_tgt_br" 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:20.271 Cannot find device "nvmf_tgt_br2" 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:20.271 Cannot find device "nvmf_br" 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:27:20.271 Cannot find device "nvmf_init_if" 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:20.271 Cannot find device "nvmf_init_if2" 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:20.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:20.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:20.271 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:20.531 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:20.531 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:20.531 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:20.531 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:20.531 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:20.531 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:20.531 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:20.531 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:20.531 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:20.531 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:20.531 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
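The scaffolding assembled here, condensed to one initiator/target pair (the run builds a second pair, nvmf_init_if2/nvmf_tgt_if2, the same way; the bridge attachments and the iptables holes for port 4420 follow just below):

    ip netns add nvmf_tgt_ns_spdk                      # isolated namespace for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br            # bridge attachments, as traced below
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                 # host-side reachability check

The "Cannot find device" / "No such file or directory" lines above are expected: nvmf_veth_fini runs unconditionally before setup, so the first pass has nothing to delete.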
00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:20.532 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:20.532 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:27:20.532 00:27:20.532 --- 10.0.0.3 ping statistics --- 00:27:20.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.532 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:20.532 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:20.532 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.117 ms 00:27:20.532 00:27:20.532 --- 10.0.0.4 ping statistics --- 00:27:20.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.532 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:20.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:20.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:27:20.532 00:27:20.532 --- 10.0.0.1 ping statistics --- 00:27:20.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.532 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:20.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:20.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:27:20.532 00:27:20.532 --- 10.0.0.2 ping statistics --- 00:27:20.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.532 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=108154 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 108154 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 108154 ']' 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:20.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:20.532 15:35:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:20.792 [2024-11-20 15:35:07.048605] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:20.792 [2024-11-20 15:35:07.049480] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:27:20.792 [2024-11-20 15:35:07.049525] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.792 [2024-11-20 15:35:07.189761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:20.792 [2024-11-20 15:35:07.250622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:20.792 [2024-11-20 15:35:07.250683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.792 [2024-11-20 15:35:07.250694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.792 [2024-11-20 15:35:07.250702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.792 [2024-11-20 15:35:07.250709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:20.792 [2024-11-20 15:35:07.252000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.792 [2024-11-20 15:35:07.252001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.051 [2024-11-20 15:35:07.374390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:21.051 [2024-11-20 15:35:07.374693] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:21.051 [2024-11-20 15:35:07.375111] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:21.624 15:35:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.624 15:35:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:27:21.624 15:35:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:21.624 15:35:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:21.624 15:35:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:21.624 15:35:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.624 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:27:21.624 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:27:21.624 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:27:21.624 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:27:21.624 5000+0 records in 00:27:21.624 5000+0 records out 00:27:21.624 10240000 bytes (10 MB, 9.8 MiB) copied, 0.041113 s, 249 MB/s 00:27:21.624 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:27:21.624 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.624 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:21.624 AIO0 00:27:21.624 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.624 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:27:21.624 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.624 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:21.884 [2024-11-20 15:35:08.109357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:21.885 [2024-11-20 15:35:08.154539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 108154 0 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108154 0 idle 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108154 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108154 -w 256 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108154 root 20 0 64.2g 45440 32896 S 0.0 0.4 0:00.31 reactor_0' 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108154 root 20 0 64.2g 45440 32896 S 0.0 0.4 0:00.31 reactor_0 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 108154 1 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108154 1 idle 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108154 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108154 -w 256 00:27:21.885 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108159 root 20 0 64.2g 45440 32896 S 0.0 0.4 0:00.00 reactor_1' 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108159 root 20 0 64.2g 45440 32896 S 0.0 0.4 0:00.00 reactor_1 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=108230 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:27:22.145 
15:35:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 108154 0 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 108154 0 busy 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108154 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108154 -w 256 00:27:22.145 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:27:22.404 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108154 root 20 0 64.2g 45440 32896 S 0.0 0.4 0:00.31 reactor_0' 00:27:22.404 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108154 root 20 0 64.2g 45440 32896 S 0.0 0.4 0:00.31 reactor_0 00:27:22.404 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:27:22.404 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:27:22.404 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:27:22.404 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:27:22.404 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:27:22.404 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:27:22.404 15:35:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:27:23.342 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:27:23.342 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:27:23.342 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108154 -w 256 00:27:23.342 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108154 root 20 0 64.2g 46720 33280 R 99.9 0.4 0:01.62 reactor_0' 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108154 root 20 0 64.2g 46720 33280 R 99.9 0.4 0:01.62 reactor_0 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 108154 1 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 108154 1 busy 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108154 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108154 -w 256 00:27:23.601 15:35:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:27:23.860 15:35:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108159 root 20 0 64.2g 46720 33280 R 66.7 0.4 0:00.78 reactor_1' 00:27:23.860 15:35:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108159 root 20 0 64.2g 46720 33280 R 66.7 0.4 0:00.78 reactor_1 00:27:23.860 15:35:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:27:23.860 15:35:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:27:23.861 15:35:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:27:23.861 15:35:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:27:23.861 15:35:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:27:23.861 15:35:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:27:23.861 15:35:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:27:23.861 15:35:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:27:23.861 15:35:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 108230 00:27:33.845 Initializing NVMe Controllers 00:27:33.845 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:27:33.845 Controller IO queue size 256, less than required. 00:27:33.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:33.845 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:33.845 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:33.845 Initialization complete. Launching workers. 
00:27:33.845 ======================================================== 00:27:33.845 Latency(us) 00:27:33.845 Device Information : IOPS MiB/s Average min max 00:27:33.845 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 8496.09 33.19 30161.88 14170.68 75167.27 00:27:33.845 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 7822.09 30.56 32754.84 12547.52 75473.07 00:27:33.845 ======================================================== 00:27:33.845 Total : 16318.18 63.74 31404.81 12547.52 75473.07 00:27:33.845 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 108154 0 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108154 0 idle 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108154 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108154 -w 256 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108154 root 20 0 64.2g 46720 33280 S 0.0 0.4 0:12.92 reactor_0' 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108154 root 20 0 64.2g 46720 33280 S 0.0 0.4 0:12.92 reactor_0 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:27:33.845 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 108154 1 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108154 1 idle 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108154 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108154 -w 256 00:27:33.846 15:35:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108159 root 20 0 64.2g 46720 33280 S 0.0 0.4 0:06.32 reactor_1' 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108159 root 20 0 64.2g 46720 33280 S 0.0 0.4 0:06.32 reactor_1 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:33.846 15:35:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for 
i in {0..1} 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 108154 0 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108154 0 idle 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108154 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108154 -w 256 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108154 root 20 0 64.2g 48768 33280 S 0.0 0.4 0:13.00 reactor_0' 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108154 root 20 0 64.2g 48768 33280 S 0.0 0.4 0:13.00 reactor_0 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 108154 1 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108154 1 idle 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108154 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@25 -- # (( j = 10 )) 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108154 -w 256 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108159 root 20 0 64.2g 48768 33280 S 0.0 0.4 0:06.34 reactor_1' 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108159 root 20 0 64.2g 48768 33280 S 0.0 0.4 0:06.34 reactor_1 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:27:35.225 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:27:35.226 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:27:35.226 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:27:35.226 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:27:35.226 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:27:35.226 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:27:35.226 15:35:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:27:35.226 15:35:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:35.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.486 rmmod nvme_tcp 00:27:35.486 rmmod nvme_fabrics 00:27:35.486 rmmod nvme_keyring 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 108154 ']' 00:27:35.486 15:35:21 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 108154 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 108154 ']' 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 108154 00:27:35.486 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:27:35.745 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:35.745 15:35:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108154 00:27:35.745 15:35:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:35.745 15:35:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:35.745 killing process with pid 108154 00:27:35.745 15:35:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108154' 00:27:35.745 15:35:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 108154 00:27:35.745 15:35:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 108154 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:36.034 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:36.294 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:36.294 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:36.294 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:36.294 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:36.294 15:35:22 nvmf_tcp.nvmf_interrupt -- 
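The cleanup trace above is nvmftestfini at work: every firewall rule the test installed carries an SPDK_NVMF comment, so one save/filter/restore round trip removes exactly those rules, after which the veth endpoints, bridge, and target namespace are dismantled. A sketch of that sequence; the final netns delete is an assumption about what the hidden _remove_spdk_ns helper does, since the trace disables xtrace around it:

# Drop only the rules the test tagged, then tear down the virtual topology.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed: hidden behind xtrace_disable above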
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.294 15:35:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.294 15:35:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.294 15:35:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:27:36.294 00:27:36.294 real 0m16.495s 00:27:36.294 user 0m27.346s 00:27:36.294 sys 0m8.082s 00:27:36.294 15:35:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:36.294 15:35:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:36.294 ************************************ 00:27:36.294 END TEST nvmf_interrupt 00:27:36.294 ************************************ 00:27:36.294 ************************************ 00:27:36.294 END TEST nvmf_tcp 00:27:36.294 ************************************ 00:27:36.294 00:27:36.294 real 20m46.599s 00:27:36.294 user 51m51.930s 00:27:36.294 sys 6m39.125s 00:27:36.294 15:35:22 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:36.294 15:35:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.294 15:35:22 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:27:36.294 15:35:22 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:36.294 15:35:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:36.294 15:35:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:36.294 15:35:22 -- common/autotest_common.sh@10 -- # set +x 00:27:36.294 ************************************ 00:27:36.294 START TEST spdkcli_nvmf_tcp 00:27:36.294 ************************************ 00:27:36.295 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:36.554 * Looking for test storage... 
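Every test in this log is bracketed the same way: a START TEST banner, a timed run of the test script, the real/user/sys totals seen above, and an END TEST banner so the log can be sliced per test. A sketch of such a wrapper; the banner width is illustrative, and the real run_test in autotest_common.sh also validates its arguments first:

# Run a test script under time and fence its output with banners.
run_test_sketch() {
    local name=$1; shift
    printf '%s\n' '************************************' "START TEST $name" '************************************'
    time "$@"
    local rc=$?
    printf '%s\n' '************************************' "END TEST $name" '************************************'
    return $rc
}
run_test_sketch demo_test true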
00:27:36.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.554 --rc genhtml_branch_coverage=1 00:27:36.554 --rc genhtml_function_coverage=1 00:27:36.554 --rc genhtml_legend=1 00:27:36.554 --rc geninfo_all_blocks=1 00:27:36.554 --rc geninfo_unexecuted_blocks=1 00:27:36.554 00:27:36.554 ' 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.554 --rc genhtml_branch_coverage=1 
00:27:36.554 --rc genhtml_function_coverage=1 00:27:36.554 --rc genhtml_legend=1 00:27:36.554 --rc geninfo_all_blocks=1 00:27:36.554 --rc geninfo_unexecuted_blocks=1 00:27:36.554 00:27:36.554 ' 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.554 --rc genhtml_branch_coverage=1 00:27:36.554 --rc genhtml_function_coverage=1 00:27:36.554 --rc genhtml_legend=1 00:27:36.554 --rc geninfo_all_blocks=1 00:27:36.554 --rc geninfo_unexecuted_blocks=1 00:27:36.554 00:27:36.554 ' 00:27:36.554 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.554 --rc genhtml_branch_coverage=1 00:27:36.554 --rc genhtml_function_coverage=1 00:27:36.554 --rc genhtml_legend=1 00:27:36.554 --rc geninfo_all_blocks=1 00:27:36.554 --rc geninfo_unexecuted_blocks=1 00:27:36.554 00:27:36.554 ' 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:36.554 15:35:23 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:27:36.813 15:35:23 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
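The host identity used for every connect in these suites comes from nvme-cli, as the gen-hostnqn trace a few lines above shows: a random uuid-based NQN, with the host ID reused from its uuid suffix. A sketch of that setup; the connect line at the end only illustrates how the array is consumed:

# Generate a host NQN once and derive the matching host ID from it.
NVME_HOSTNQN=$(nvme gen-hostnqn)           # nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*:}            # the uuid after the last colon
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# e.g.: nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"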
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:36.814 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
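The "integer expression expected" message above is a genuine shell diagnostic captured by the trace: line 33 of nvmf/common.sh evaluates '[' '' -eq 1 ']', and test cannot compare an empty string as an integer, so the condition simply evaluates false and the script continues. A defensive form of that kind of guard, with an illustrative variable name:

# Default the operand so an unset or empty flag compares cleanly instead of
# tripping "[: : integer expression expected".
flag=""
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag enabled"
fi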
# set +x 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=108572 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 108572 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 108572 ']' 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.814 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.814 [2024-11-20 15:35:23.112781] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:27:36.814 [2024-11-20 15:35:23.112851] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108572 ] 00:27:36.814 [2024-11-20 15:35:23.266224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:37.073 [2024-11-20 15:35:23.326797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.073 [2024-11-20 15:35:23.326797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.642 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.642 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:27:37.642 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:37.642 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:37.642 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:37.642 15:35:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:37.642 15:35:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:37.642 15:35:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:37.642 15:35:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.642 15:35:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:37.642 15:35:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:37.642 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:37.642 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:37.642 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:37.642 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:37.642 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:37.642 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:37.642 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:37.643 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:37.643 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:37.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:37.643 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:37.643 ' 00:27:40.933 [2024-11-20 15:35:26.826969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.869 [2024-11-20 15:35:28.181621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:44.403 [2024-11-20 15:35:30.694576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:46.936 [2024-11-20 15:35:32.849118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:48.312 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:48.312 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:48.312 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:48.312 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
00:27:48.312 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:48.312 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:48.312 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:48.312 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:48.312 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:48.312 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:48.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:48.312 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:48.312 15:35:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:48.312 15:35:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:48.312 15:35:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
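Each Executing command line above is spdkcli_job.py replaying one [command, match, expect] triple from the list passed on its command line: run the spdkcli command, then verify the match string against the output. A rough bash equivalent of that loop, assuming spdkcli.py executes a command given as arguments the way the later "spdkcli.py ll /nvmf" call does; the two triples shown are a small subset of the real job:

# Replay command/match pairs through spdkcli and flag mismatches.
spdkcli=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py
run_and_check() {
    local cmd=$1 match=$2
    echo "Executing command: [$cmd, $match, True]"
    "$spdkcli" $cmd | grep -q -- "$match" || echo "no match for: $match" >&2
}
run_and_check '/bdevs/malloc create 32 512 Malloc1' 'Malloc1'
run_and_check '/bdevs/malloc delete Malloc1' 'Malloc1'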
# set +x 00:27:48.312 15:35:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:48.312 15:35:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:48.312 15:35:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:48.312 15:35:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:48.312 15:35:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:27:48.880 15:35:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:48.880 15:35:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:48.880 15:35:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:48.880 15:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:48.880 15:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:48.880 15:35:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:48.880 15:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:48.880 15:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:48.880 15:35:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:48.880 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:48.880 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:48.880 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:48.880 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:48.880 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:48.880 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:48.880 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:48.880 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:48.880 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:48.880 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:48.880 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:48.880 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:48.880 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:48.880 ' 00:27:55.491 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:55.491 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:55.491 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:55.491 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:55.491 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:55.491 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:55.491 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:55.491 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:55.491 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:55.491 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:55.491 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:55.491 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:55.491 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:55.491 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:55.491 15:35:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:55.491 15:35:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:55.491 15:35:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.491 15:35:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 108572 00:27:55.491 15:35:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 108572 ']' 00:27:55.491 15:35:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 108572 00:27:55.491 15:35:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:27:55.491 15:35:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.491 15:35:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108572 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108572' 00:27:55.491 killing process with pid 108572 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 108572 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 108572 00:27:55.491 Process with pid 108572 is not found 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 108572 ']' 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 108572 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 108572 ']' 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 108572 00:27:55.491 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (108572) - No such process 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 108572 is not found' 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:55.491 00:27:55.491 real 0m18.533s 00:27:55.491 user 0m40.383s 00:27:55.491 sys 0m1.161s 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- 
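The killprocess trace above performs two safety checks before signalling: kill -0 to confirm the pid is still alive (in the second pass here it has already exited, hence the No such process note), and a ps name lookup so a stray sudo wrapper is never killed. A compact sketch of the same helper:

# Kill an SPDK app pid only after confirming it exists and is not sudo.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
    [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null   # wait reaps it when it is our child
}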
common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.491 15:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.491 ************************************ 00:27:55.491 END TEST spdkcli_nvmf_tcp 00:27:55.491 ************************************ 00:27:55.491 15:35:41 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:55.491 15:35:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:55.491 15:35:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.491 15:35:41 -- common/autotest_common.sh@10 -- # set +x 00:27:55.491 ************************************ 00:27:55.491 START TEST nvmf_identify_passthru 00:27:55.491 ************************************ 00:27:55.491 15:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:55.491 * Looking for test storage... 00:27:55.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:55.491 15:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:55.491 15:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:27:55.491 15:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:55.491 15:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:27:55.491 15:35:41 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:27:55.492 15:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:55.492 15:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:55.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.492 --rc genhtml_branch_coverage=1 00:27:55.492 --rc genhtml_function_coverage=1 00:27:55.492 --rc genhtml_legend=1 00:27:55.492 --rc geninfo_all_blocks=1 00:27:55.492 --rc geninfo_unexecuted_blocks=1 00:27:55.492 00:27:55.492 ' 00:27:55.492 15:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:55.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.492 --rc genhtml_branch_coverage=1 00:27:55.492 --rc genhtml_function_coverage=1 00:27:55.492 --rc genhtml_legend=1 00:27:55.492 --rc geninfo_all_blocks=1 00:27:55.492 --rc geninfo_unexecuted_blocks=1 00:27:55.492 00:27:55.492 ' 00:27:55.492 15:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:55.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.492 --rc genhtml_branch_coverage=1 00:27:55.492 --rc genhtml_function_coverage=1 00:27:55.492 --rc genhtml_legend=1 00:27:55.492 --rc geninfo_all_blocks=1 00:27:55.492 --rc geninfo_unexecuted_blocks=1 00:27:55.492 00:27:55.492 ' 00:27:55.492 15:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:55.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.492 --rc genhtml_branch_coverage=1 00:27:55.492 --rc genhtml_function_coverage=1 00:27:55.492 --rc genhtml_legend=1 00:27:55.492 --rc geninfo_all_blocks=1 00:27:55.492 --rc geninfo_unexecuted_blocks=1 00:27:55.492 00:27:55.492 ' 00:27:55.492 15:35:41 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.492 
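The lt/cmp_versions trace that both suites run (here asking whether lcov 1.15 predates 2) is a component-wise comparison: split each version on '.', '-' or ':', then walk the components left to right, treating missing ones as zero. A self-contained sketch of that logic:

# Return success when $1 sorts strictly before $2, component by component.
lt_sketch() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
lt_sketch 1.15 2 && echo 'lcov is older than the 2.x branch'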
15:35:41 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.492 15:35:41 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.492 15:35:41 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.492 15:35:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.492 15:35:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:55.492 15:35:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:55.492 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:55.492 15:35:41 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.492 15:35:41 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.492 15:35:41 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.492 15:35:41 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.492 15:35:41 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.492 15:35:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:55.492 15:35:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.492 15:35:41 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.492 15:35:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:55.492 15:35:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:55.492 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:55.493 Cannot find device "nvmf_init_br" 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:55.493 Cannot find device "nvmf_init_br2" 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:55.493 Cannot find device "nvmf_tgt_br" 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:55.493 Cannot find device "nvmf_tgt_br2" 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:55.493 Cannot find device "nvmf_init_br" 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:55.493 Cannot find device "nvmf_init_br2" 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:55.493 Cannot find device "nvmf_tgt_br" 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:55.493 Cannot find device "nvmf_tgt_br2" 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:55.493 Cannot find device "nvmf_br" 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:55.493 Cannot find device "nvmf_init_if" 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:55.493 Cannot find device "nvmf_init_if2" 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:55.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:55.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:55.493 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:55.752 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:55.752 15:35:41 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:55.752 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:56.012 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:56.012 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:27:56.012 00:27:56.012 --- 10.0.0.3 ping statistics --- 00:27:56.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.012 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:27:56.012 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:56.012 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:56.012 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:27:56.012 00:27:56.012 --- 10.0.0.4 ping statistics --- 00:27:56.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.012 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:27:56.012 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:56.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:56.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:27:56.012 00:27:56.012 --- 10.0.0.1 ping statistics --- 00:27:56.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.012 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:27:56.012 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:56.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:56.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:27:56.012 00:27:56.012 --- 10.0.0.2 ping statistics --- 00:27:56.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.012 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:27:56.012 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.012 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:27:56.012 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:56.012 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.012 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:56.012 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:56.012 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.012 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:56.012 15:35:42 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:56.012 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.012 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:56.012 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:27:56.012 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:27:56.012 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:27:56.012 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:56.012 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:56.012 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:56.271 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
00:27:56.271 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:56.271 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:56.271 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:56.531 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:27:56.531 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:56.531 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:56.531 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.531 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:56.531 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.531 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.531 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=109102 00:27:56.531 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:56.531 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:56.531 15:35:42 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 109102 00:27:56.531 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 109102 ']' 00:27:56.531 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.531 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:56.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.531 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.531 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:56.531 15:35:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.531 [2024-11-20 15:35:42.950243] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:27:56.531 [2024-11-20 15:35:42.950320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.790 [2024-11-20 15:35:43.106840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:56.790 [2024-11-20 15:35:43.167385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.790 [2024-11-20 15:35:43.167631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.790 [2024-11-20 15:35:43.167848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.790 [2024-11-20 15:35:43.167896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:56.790 [2024-11-20 15:35:43.167927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:56.790 [2024-11-20 15:35:43.169388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.790 [2024-11-20 15:35:43.169654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.790 [2024-11-20 15:35:43.169483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.790 [2024-11-20 15:35:43.169652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.727 15:35:43 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:57.727 15:35:43 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:27:57.727 15:35:43 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:57.727 15:35:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.727 15:35:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 15:35:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.727 15:35:43 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:57.727 15:35:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.727 15:35:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 [2024-11-20 15:35:43.979928] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:57.727 15:35:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.727 15:35:43 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:57.727 15:35:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.727 15:35:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 [2024-11-20 15:35:43.994054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.727 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 Nvme0n1 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.727 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.727 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.727 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 [2024-11-20 15:35:44.172779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.727 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.727 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 [ 00:27:57.727 { 00:27:57.727 "allow_any_host": true, 00:27:57.727 "hosts": [], 00:27:57.727 "listen_addresses": [], 00:27:57.727 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:57.727 "subtype": "Discovery" 00:27:57.727 }, 00:27:57.727 { 00:27:57.727 "allow_any_host": true, 00:27:57.727 "hosts": [], 00:27:57.727 "listen_addresses": [ 00:27:57.727 { 00:27:57.727 "adrfam": "IPv4", 00:27:57.728 "traddr": "10.0.0.3", 00:27:57.728 "trsvcid": "4420", 00:27:57.728 "trtype": "TCP" 00:27:57.728 } 00:27:57.728 ], 00:27:57.728 "max_cntlid": 65519, 00:27:57.728 "max_namespaces": 1, 00:27:57.728 "min_cntlid": 1, 00:27:57.728 "model_number": "SPDK bdev Controller", 00:27:57.728 "namespaces": [ 00:27:57.728 { 00:27:57.728 "bdev_name": "Nvme0n1", 00:27:57.728 "name": "Nvme0n1", 00:27:57.728 "nguid": "04110005DE0F42508AFF050134DB0F17", 00:27:57.728 "nsid": 1, 00:27:57.728 "uuid": "04110005-de0f-4250-8aff-050134db0f17" 00:27:57.728 } 00:27:57.728 ], 00:27:57.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:57.728 "serial_number": "SPDK00000000000001", 00:27:57.728 "subtype": "NVMe" 00:27:57.728 } 00:27:57.728 ] 00:27:57.728 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.987 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:57.987 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:57.987 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:57.987 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:27:57.987 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:57.987 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:57.987 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:58.246 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:27:58.246 15:35:44 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:27:58.246 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:27:58.246 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:58.246 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.246 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:58.246 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.246 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:58.246 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:58.246 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:58.246 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:27:58.505 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:58.505 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:27:58.505 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:58.505 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:58.505 rmmod nvme_tcp 00:27:58.505 rmmod nvme_fabrics 00:27:58.505 rmmod nvme_keyring 00:27:58.505 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:58.505 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:27:58.505 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:27:58.505 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 109102 ']' 00:27:58.505 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 109102 00:27:58.505 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 109102 ']' 00:27:58.505 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 109102 00:27:58.505 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:27:58.505 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:58.505 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109102 00:27:58.505 killing process with pid 109102 00:27:58.505 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:58.505 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:58.505 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109102' 00:27:58.505 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 109102 00:27:58.505 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 109102 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:58.764 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:59.024 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:59.024 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:59.024 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:59.024 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:59.024 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:59.024 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.024 15:35:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:59.024 15:35:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.024 15:35:45 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:27:59.024 00:27:59.024 real 0m4.069s 00:27:59.024 user 0m8.360s 00:27:59.024 sys 0m1.395s 00:27:59.024 15:35:45 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:59.024 15:35:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:59.024 ************************************ 00:27:59.024 END TEST nvmf_identify_passthru 00:27:59.024 ************************************ 00:27:59.284 15:35:45 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:59.284 15:35:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:59.284 15:35:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:59.284 15:35:45 -- common/autotest_common.sh@10 -- # set +x 00:27:59.284 ************************************ 00:27:59.284 START TEST nvmf_dif 00:27:59.284 ************************************ 00:27:59.284 15:35:45 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:59.284 * Looking for test storage... 
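The passthru test that just finished is, at its core, two identify calls compared against each other: one aimed straight at the PCIe device, one routed through the nvmf target started with --passthru-identify-ctrlr. A simplified sketch of the commands traced above (paths follow this run's repo layout):

    # identity reported locally by the PCIe controller
    build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' | grep 'Serial Number:'
    # identity reported through the NVMe/TCP subsystem backed by that controller
    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        | grep 'Serial Number:'
    # the test passes when both paths report the same serial (12340) and model (QEMU)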
00:27:59.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:59.284 15:35:45 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:59.284 15:35:45 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:27:59.284 15:35:45 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:59.284 15:35:45 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:59.284 15:35:45 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:27:59.545 15:35:45 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:27:59.545 15:35:45 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:59.545 15:35:45 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:59.545 15:35:45 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:27:59.545 15:35:45 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:59.545 15:35:45 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:59.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.545 --rc genhtml_branch_coverage=1 00:27:59.545 --rc genhtml_function_coverage=1 00:27:59.545 --rc genhtml_legend=1 00:27:59.545 --rc geninfo_all_blocks=1 00:27:59.545 --rc geninfo_unexecuted_blocks=1 00:27:59.545 00:27:59.545 ' 00:27:59.545 15:35:45 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:59.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.545 --rc genhtml_branch_coverage=1 00:27:59.545 --rc genhtml_function_coverage=1 00:27:59.545 --rc genhtml_legend=1 00:27:59.545 --rc geninfo_all_blocks=1 00:27:59.545 --rc geninfo_unexecuted_blocks=1 00:27:59.545 00:27:59.545 ' 00:27:59.545 15:35:45 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:27:59.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.545 --rc genhtml_branch_coverage=1 00:27:59.545 --rc genhtml_function_coverage=1 00:27:59.545 --rc genhtml_legend=1 00:27:59.545 --rc geninfo_all_blocks=1 00:27:59.545 --rc geninfo_unexecuted_blocks=1 00:27:59.545 00:27:59.545 ' 00:27:59.545 15:35:45 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:59.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.545 --rc genhtml_branch_coverage=1 00:27:59.545 --rc genhtml_function_coverage=1 00:27:59.545 --rc genhtml_legend=1 00:27:59.545 --rc geninfo_all_blocks=1 00:27:59.545 --rc geninfo_unexecuted_blocks=1 00:27:59.545 00:27:59.545 ' 00:27:59.545 15:35:45 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:59.545 15:35:45 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:27:59.545 15:35:45 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.545 15:35:45 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.545 15:35:45 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.545 15:35:45 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.545 15:35:45 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.545 15:35:45 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.545 15:35:45 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:59.545 15:35:45 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:59.545 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:59.545 15:35:45 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:59.545 15:35:45 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:59.545 15:35:45 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:59.545 15:35:45 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:59.545 15:35:45 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.545 15:35:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:59.545 15:35:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:59.545 15:35:45 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:59.545 15:35:45 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:59.546 Cannot find device "nvmf_init_br" 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:59.546 Cannot find device "nvmf_init_br2" 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:59.546 Cannot find device "nvmf_tgt_br" 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@164 -- # true 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:59.546 Cannot find device "nvmf_tgt_br2" 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@165 -- # true 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:59.546 Cannot find device "nvmf_init_br" 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@166 -- # true 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:59.546 Cannot find device "nvmf_init_br2" 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@167 -- # true 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:59.546 Cannot find device "nvmf_tgt_br" 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@168 -- # true 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:59.546 Cannot find device "nvmf_tgt_br2" 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@169 -- # true 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:59.546 Cannot find device "nvmf_br" 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@170 -- # true 00:27:59.546 15:35:45 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:27:59.546 Cannot find device "nvmf_init_if" 00:27:59.546 15:35:46 nvmf_dif -- nvmf/common.sh@171 -- # true 00:27:59.546 15:35:46 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:59.805 Cannot find device "nvmf_init_if2" 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@172 -- # true 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:59.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@173 -- # true 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:59.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@174 -- # true 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:59.805 15:35:46 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:00.065 15:35:46 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:00.065 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:00.065 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:28:00.065 00:28:00.065 --- 10.0.0.3 ping statistics --- 00:28:00.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.065 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:00.065 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:00.065 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.108 ms 00:28:00.065 00:28:00.065 --- 10.0.0.4 ping statistics --- 00:28:00.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.065 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:00.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:28:00.065 00:28:00.065 --- 10.0.0.1 ping statistics --- 00:28:00.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.065 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:00.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:00.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:28:00.065 00:28:00.065 --- 10.0.0.2 ping statistics --- 00:28:00.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.065 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:28:00.065 15:35:46 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:00.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:00.635 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:00.635 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:00.635 15:35:46 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.635 15:35:46 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:00.635 15:35:46 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:00.635 15:35:46 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.635 15:35:46 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:00.635 15:35:46 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:00.635 15:35:46 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:00.635 15:35:46 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:00.635 15:35:46 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:00.635 15:35:46 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.635 15:35:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:00.635 15:35:46 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:00.635 15:35:46 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=109505 00:28:00.635 15:35:46 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 109505 00:28:00.635 15:35:46 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 109505 ']' 00:28:00.635 15:35:46 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.635 15:35:46 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.635 15:35:46 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.635 15:35:46 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.635 15:35:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:00.635 [2024-11-20 15:35:47.053036] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:28:00.635 [2024-11-20 15:35:47.053090] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.895 [2024-11-20 15:35:47.206502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.895 [2024-11-20 15:35:47.265300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:00.895 [2024-11-20 15:35:47.265342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.895 [2024-11-20 15:35:47.265352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.895 [2024-11-20 15:35:47.265360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.895 [2024-11-20 15:35:47.265367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:00.895 [2024-11-20 15:35:47.265759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.463 15:35:47 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:01.463 15:35:47 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:28:01.463 15:35:47 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:01.463 15:35:47 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:01.463 15:35:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:01.723 15:35:47 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.723 15:35:47 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:01.723 15:35:47 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:01.723 15:35:47 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.723 15:35:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:01.723 [2024-11-20 15:35:47.981858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.723 15:35:47 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.723 15:35:47 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:01.723 15:35:47 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:01.723 15:35:47 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:01.723 15:35:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:01.723 ************************************ 00:28:01.723 START TEST fio_dif_1_default 00:28:01.723 ************************************ 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:01.723 bdev_null0 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.723 15:35:48 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:01.723 [2024-11-20 15:35:48.045938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:01.723 { 00:28:01.723 "params": { 00:28:01.723 "name": "Nvme$subsystem", 00:28:01.723 "trtype": "$TEST_TRANSPORT", 00:28:01.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.723 "adrfam": "ipv4", 00:28:01.723 "trsvcid": "$NVMF_PORT", 00:28:01.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.723 "hdgst": ${hdgst:-false}, 00:28:01.723 "ddgst": ${ddgst:-false} 00:28:01.723 }, 00:28:01.723 "method": "bdev_nvme_attach_controller" 00:28:01.723 } 00:28:01.723 EOF 00:28:01.723 )") 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:01.723 15:35:48 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:01.723 "params": { 00:28:01.723 "name": "Nvme0", 00:28:01.723 "trtype": "tcp", 00:28:01.723 "traddr": "10.0.0.3", 00:28:01.723 "adrfam": "ipv4", 00:28:01.723 "trsvcid": "4420", 00:28:01.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:01.723 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:01.723 "hdgst": false, 00:28:01.723 "ddgst": false 00:28:01.723 }, 00:28:01.723 "method": "bdev_nvme_attach_controller" 00:28:01.723 }' 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:01.723 15:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:01.983 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:01.983 fio-3.35 00:28:01.983 Starting 1 thread 00:28:14.206 00:28:14.206 filename0: (groupid=0, jobs=1): err= 0: pid=109589: Wed Nov 20 15:35:58 2024 00:28:14.206 read: IOPS=425, BW=1700KiB/s (1741kB/s)(16.6MiB/10013msec) 00:28:14.206 slat (nsec): min=5303, max=38556, avg=5727.12, stdev=1483.74 00:28:14.206 clat (usec): min=312, max=41631, avg=9394.42, stdev=16847.79 00:28:14.206 lat (usec): min=317, max=41637, avg=9400.15, stdev=16847.74 00:28:14.206 clat percentiles (usec): 00:28:14.206 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 330], 20.00th=[ 338], 00:28:14.206 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 
359], 00:28:14.206 | 70.00th=[ 367], 80.00th=[40633], 90.00th=[40633], 95.00th=[41157], 00:28:14.206 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:28:14.206 | 99.99th=[41681] 00:28:14.206 bw ( KiB/s): min= 1152, max= 2144, per=99.99%, avg=1700.80, stdev=259.40, samples=20 00:28:14.206 iops : min= 288, max= 536, avg=425.20, stdev=64.85, samples=20 00:28:14.206 lat (usec) : 500=77.11%, 750=0.42% 00:28:14.206 lat (msec) : 4=0.09%, 50=22.37% 00:28:14.206 cpu : usr=83.33%, sys=16.09%, ctx=103, majf=0, minf=9 00:28:14.206 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:14.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.206 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:14.206 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:14.206 00:28:14.206 Run status group 0 (all jobs): 00:28:14.206 READ: bw=1700KiB/s (1741kB/s), 1700KiB/s-1700KiB/s (1741kB/s-1741kB/s), io=16.6MiB (17.4MB), run=10013-10013msec 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:14.206 ************************************ 00:28:14.206 END TEST fio_dif_1_default 00:28:14.206 ************************************ 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.206 00:28:14.206 real 0m11.169s 00:28:14.206 user 0m9.100s 00:28:14.206 sys 0m1.937s 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:14.206 15:35:59 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:14.206 15:35:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:14.206 15:35:59 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:14.206 15:35:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:14.206 ************************************ 00:28:14.206 START TEST fio_dif_1_multi_subsystems 00:28:14.206 ************************************ 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 
-- # local files=1 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.206 bdev_null0 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.206 [2024-11-20 15:35:59.288869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.206 bdev_null1 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.206 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:14.206 { 00:28:14.206 "params": { 00:28:14.206 "name": "Nvme$subsystem", 00:28:14.206 "trtype": "$TEST_TRANSPORT", 00:28:14.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.206 "adrfam": "ipv4", 00:28:14.206 "trsvcid": "$NVMF_PORT", 00:28:14.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.206 "hdgst": ${hdgst:-false}, 00:28:14.207 "ddgst": ${ddgst:-false} 00:28:14.207 }, 00:28:14.207 "method": "bdev_nvme_attach_controller" 00:28:14.207 } 00:28:14.207 EOF 00:28:14.207 )") 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 
-- # cat 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:14.207 { 00:28:14.207 "params": { 00:28:14.207 "name": "Nvme$subsystem", 00:28:14.207 "trtype": "$TEST_TRANSPORT", 00:28:14.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.207 "adrfam": "ipv4", 00:28:14.207 "trsvcid": "$NVMF_PORT", 00:28:14.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.207 "hdgst": ${hdgst:-false}, 00:28:14.207 "ddgst": ${ddgst:-false} 00:28:14.207 }, 00:28:14.207 "method": "bdev_nvme_attach_controller" 00:28:14.207 } 00:28:14.207 EOF 00:28:14.207 )") 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
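[annotation] The xtrace records above are nvmf/common.sh's gen_nvmf_target_json at work: one bdev_nvme_attach_controller fragment is built per subsystem id via the config+=("$(cat <<-EOF ...)") heredocs, and "jq ." validates the merged document, which is printed in the records that follow. A condensed standalone sketch of that logic, reconstructed from the trace — the outer "subsystems"/"bdev" wrapper and the env-var fallbacks are assumptions not visible in this excerpt (this run uses tcp / 10.0.0.3 / 4420):

# Sketch only; the real helper lives in nvmf/common.sh.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller fragment per subsystem id (0, 1, ...).
        config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":%s,"ddgst":%s},"method":"bdev_nvme_attach_controller"}' \
            "$subsystem" "${TEST_TRANSPORT:-tcp}" "${NVMF_FIRST_TARGET_IP:-10.0.0.3}" "${NVMF_PORT:-4420}" \
            "$subsystem" "$subsystem" "${hdgst:-false}" "${ddgst:-false}")")
    done
    # Join the fragments with commas (the IFS=, step below in the trace)
    # and let jq validate and pretty-print the final document.
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}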
00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:14.207 "params": { 00:28:14.207 "name": "Nvme0", 00:28:14.207 "trtype": "tcp", 00:28:14.207 "traddr": "10.0.0.3", 00:28:14.207 "adrfam": "ipv4", 00:28:14.207 "trsvcid": "4420", 00:28:14.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:14.207 "hdgst": false, 00:28:14.207 "ddgst": false 00:28:14.207 }, 00:28:14.207 "method": "bdev_nvme_attach_controller" 00:28:14.207 },{ 00:28:14.207 "params": { 00:28:14.207 "name": "Nvme1", 00:28:14.207 "trtype": "tcp", 00:28:14.207 "traddr": "10.0.0.3", 00:28:14.207 "adrfam": "ipv4", 00:28:14.207 "trsvcid": "4420", 00:28:14.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:14.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:14.207 "hdgst": false, 00:28:14.207 "ddgst": false 00:28:14.207 }, 00:28:14.207 "method": "bdev_nvme_attach_controller" 00:28:14.207 }' 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:14.207 15:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.207 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:14.207 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:14.207 fio-3.35 00:28:14.207 Starting 2 threads 00:28:24.192 00:28:24.192 filename0: (groupid=0, jobs=1): err= 0: pid=109755: Wed Nov 20 15:36:10 2024 00:28:24.192 read: IOPS=388, BW=1556KiB/s (1593kB/s)(15.2MiB/10026msec) 00:28:24.192 slat (nsec): min=5357, max=57744, avg=8135.24, stdev=5908.15 00:28:24.192 clat (usec): min=311, max=42416, avg=10257.76, stdev=17375.29 00:28:24.192 lat (usec): min=317, max=42422, avg=10265.90, stdev=17375.08 00:28:24.192 clat percentiles (usec): 00:28:24.192 | 1.00th=[ 330], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 371], 00:28:24.192 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 408], 00:28:24.192 | 70.00th=[ 627], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:28:24.192 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:28:24.192 | 99.99th=[42206] 00:28:24.192 bw ( KiB/s): min= 736, max= 3360, per=50.12%, avg=1558.40, stdev=864.97, samples=20 00:28:24.192 iops : 
min= 184, max= 840, avg=389.60, stdev=216.24, samples=20 00:28:24.192 lat (usec) : 500=66.90%, 750=8.10%, 1000=0.59% 00:28:24.192 lat (msec) : 2=0.10%, 50=24.31% 00:28:24.192 cpu : usr=94.74%, sys=4.66%, ctx=244, majf=0, minf=0 00:28:24.192 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.192 issued rwts: total=3900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.192 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:24.192 filename1: (groupid=0, jobs=1): err= 0: pid=109756: Wed Nov 20 15:36:10 2024 00:28:24.192 read: IOPS=388, BW=1553KiB/s (1591kB/s)(15.2MiB/10022msec) 00:28:24.192 slat (nsec): min=5544, max=55897, avg=8066.98, stdev=5331.39 00:28:24.192 clat (usec): min=321, max=41525, avg=10276.03, stdev=17387.48 00:28:24.192 lat (usec): min=327, max=41547, avg=10284.09, stdev=17387.45 00:28:24.192 clat percentiles (usec): 00:28:24.192 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 371], 00:28:24.192 | 30.00th=[ 379], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 404], 00:28:24.192 | 70.00th=[ 635], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:28:24.192 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:28:24.192 | 99.99th=[41681] 00:28:24.192 bw ( KiB/s): min= 576, max= 2656, per=50.02%, avg=1555.20, stdev=698.81, samples=20 00:28:24.192 iops : min= 144, max= 664, avg=388.80, stdev=174.70, samples=20 00:28:24.192 lat (usec) : 500=67.32%, 750=7.73%, 1000=0.49% 00:28:24.192 lat (msec) : 2=0.10%, 50=24.36% 00:28:24.192 cpu : usr=94.76%, sys=4.75%, ctx=10, majf=0, minf=9 00:28:24.192 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.192 issued rwts: total=3892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.192 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:24.192 00:28:24.192 Run status group 0 (all jobs): 00:28:24.192 READ: bw=3109KiB/s (3183kB/s), 1553KiB/s-1556KiB/s (1591kB/s-1593kB/s), io=30.4MiB (31.9MB), run=10022-10026msec 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.192 15:36:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.192 00:28:24.192 real 0m11.413s 00:28:24.192 user 0m19.935s 00:28:24.192 sys 0m1.321s 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.192 ************************************ 00:28:24.192 15:36:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:24.192 END TEST fio_dif_1_multi_subsystems 00:28:24.192 ************************************ 00:28:24.497 15:36:10 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:24.497 15:36:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:24.497 15:36:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.497 15:36:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:24.498 ************************************ 00:28:24.498 START TEST fio_dif_rand_params 00:28:24.498 ************************************ 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.498 bdev_null0 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.498 [2024-11-20 15:36:10.771646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:24.498 { 00:28:24.498 "params": { 00:28:24.498 "name": "Nvme$subsystem", 00:28:24.498 "trtype": "$TEST_TRANSPORT", 00:28:24.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.498 "adrfam": "ipv4", 00:28:24.498 "trsvcid": "$NVMF_PORT", 00:28:24.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.498 "hdgst": ${hdgst:-false}, 00:28:24.498 "ddgst": ${ddgst:-false} 00:28:24.498 }, 00:28:24.498 "method": "bdev_nvme_attach_controller" 00:28:24.498 } 00:28:24.498 EOF 00:28:24.498 )") 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
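[annotation] The ldd | grep libasan | awk '{print $3}' records above come from the fio_plugin wrapper in autotest_common.sh (lines @1341-1356 in the trace): before launching fio it checks whether the spdk_bdev ioengine plugin was linked against a sanitizer runtime, since ASan must be the first DSO loaded. Both probes return empty here, which is why LD_PRELOAD ends up as ' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' with a leading space. A rough sketch under the names the trace shows; the exact control flow between the two probes is inferred, not verbatim:

# Sketch of the fio_plugin wrapper traced above (autotest_common.sh@1341-1356).
fio_plugin() {
    local fio_dir=/usr/src/fio
    local sanitizers=('libasan' 'libclang_rt.asan')
    local plugin=$1 sanitizer asan_lib=
    shift
    for sanitizer in "${sanitizers[@]}"; do
        # Third ldd column is the resolved library path; empty if not linked.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # Preload the sanitizer runtime (if any) ahead of the ioengine plugin.
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
}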
00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:24.498 "params": { 00:28:24.498 "name": "Nvme0", 00:28:24.498 "trtype": "tcp", 00:28:24.498 "traddr": "10.0.0.3", 00:28:24.498 "adrfam": "ipv4", 00:28:24.498 "trsvcid": "4420", 00:28:24.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:24.498 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:24.498 "hdgst": false, 00:28:24.498 "ddgst": false 00:28:24.498 }, 00:28:24.498 "method": "bdev_nvme_attach_controller" 00:28:24.498 }' 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:24.498 15:36:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.780 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:24.780 ... 
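[annotation] The fio banner just above reflects the parameters set at target/dif.sh@103 for this fio_dif_rand_params pass: randread, bs=128k, iodepth=3, runtime=5, three jobs against bdev_null0 created with --dif-type 3. The job file itself comes from gen_fio_conf and is never echoed into this log; the following is a hypothetical stand-in consistent with the banner — every key name, and the Nvme0n1 bdev name, is an assumption, only the values visible above are taken from the trace:

# Hypothetical stand-in for gen_fio_conf's output for this run (sketch only).
gen_fio_conf_sketch() {
    cat <<'FIO'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1
[filename0]
filename=Nvme0n1
FIO
}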
00:28:24.780 fio-3.35 00:28:24.780 Starting 3 threads 00:28:31.365 00:28:31.365 filename0: (groupid=0, jobs=1): err= 0: pid=109911: Wed Nov 20 15:36:16 2024 00:28:31.365 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(151MiB/5004msec) 00:28:31.365 slat (nsec): min=5334, max=63099, avg=12348.72, stdev=8282.08 00:28:31.365 clat (usec): min=3512, max=52526, avg=12397.85, stdev=12353.40 00:28:31.365 lat (usec): min=3518, max=52532, avg=12410.20, stdev=12354.15 00:28:31.365 clat percentiles (usec): 00:28:31.365 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 5997], 20.00th=[ 6390], 00:28:31.365 | 30.00th=[ 6652], 40.00th=[ 7701], 50.00th=[ 9372], 60.00th=[ 9896], 00:28:31.365 | 70.00th=[10290], 80.00th=[10683], 90.00th=[12256], 95.00th=[49546], 00:28:31.365 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[52691], 00:28:31.365 | 99.99th=[52691] 00:28:31.365 bw ( KiB/s): min=24064, max=35584, per=26.43%, avg=30008.89, stdev=3820.28, samples=9 00:28:31.365 iops : min= 188, max= 278, avg=234.44, stdev=29.85, samples=9 00:28:31.365 lat (msec) : 4=0.50%, 10=63.61%, 20=25.97%, 50=6.37%, 100=3.56% 00:28:31.365 cpu : usr=93.06%, sys=5.52%, ctx=12, majf=0, minf=0 00:28:31.365 IO depths : 1=3.6%, 2=96.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:31.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.365 issued rwts: total=1209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.365 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:31.365 filename0: (groupid=0, jobs=1): err= 0: pid=109912: Wed Nov 20 15:36:16 2024 00:28:31.365 read: IOPS=394, BW=49.3MiB/s (51.7MB/s)(247MiB/5004msec) 00:28:31.365 slat (nsec): min=5304, max=48438, avg=10312.58, stdev=7785.91 00:28:31.365 clat (usec): min=3030, max=46648, avg=7579.97, stdev=3308.28 00:28:31.365 lat (usec): min=3037, max=46655, avg=7590.28, stdev=3309.83 00:28:31.365 clat percentiles (usec): 00:28:31.365 | 1.00th=[ 3458], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3589], 00:28:31.365 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7308], 60.00th=[ 7701], 00:28:31.365 | 70.00th=[ 9241], 80.00th=[10945], 90.00th=[11600], 95.00th=[11994], 00:28:31.365 | 99.00th=[12649], 99.50th=[12911], 99.90th=[45876], 99.95th=[46400], 00:28:31.365 | 99.99th=[46400] 00:28:31.365 bw ( KiB/s): min=42240, max=57600, per=44.54%, avg=50574.22, stdev=5042.80, samples=9 00:28:31.365 iops : min= 330, max= 450, avg=395.11, stdev=39.40, samples=9 00:28:31.365 lat (msec) : 4=24.82%, 10=47.87%, 20=27.15%, 50=0.15% 00:28:31.365 cpu : usr=91.27%, sys=6.74%, ctx=4, majf=0, minf=0 00:28:31.365 IO depths : 1=28.4%, 2=71.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:31.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.365 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.365 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:31.365 filename0: (groupid=0, jobs=1): err= 0: pid=109913: Wed Nov 20 15:36:16 2024 00:28:31.365 read: IOPS=251, BW=31.4MiB/s (32.9MB/s)(157MiB/5005msec) 00:28:31.365 slat (nsec): min=5614, max=43409, avg=14177.74, stdev=7776.78 00:28:31.365 clat (usec): min=4025, max=51858, avg=11917.92, stdev=12568.16 00:28:31.365 lat (usec): min=4044, max=51883, avg=11932.10, stdev=12568.12 00:28:31.365 clat percentiles (usec): 00:28:31.365 | 1.00th=[ 4817], 5.00th=[ 5342], 10.00th=[ 5997], 
20.00th=[ 6325], 00:28:31.365 | 30.00th=[ 6587], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 8717], 00:28:31.365 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[46400], 95.00th=[48497], 00:28:31.365 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:28:31.365 | 99.99th=[51643] 00:28:31.365 bw ( KiB/s): min=25344, max=46080, per=29.13%, avg=33080.89, stdev=5705.38, samples=9 00:28:31.365 iops : min= 198, max= 360, avg=258.44, stdev=44.57, samples=9 00:28:31.365 lat (msec) : 10=88.94%, 20=0.56%, 50=9.31%, 100=1.19% 00:28:31.365 cpu : usr=92.95%, sys=6.10%, ctx=7, majf=0, minf=9 00:28:31.365 IO depths : 1=6.1%, 2=93.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:31.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.365 issued rwts: total=1257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.365 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:31.365 00:28:31.365 Run status group 0 (all jobs): 00:28:31.365 READ: bw=111MiB/s (116MB/s), 30.2MiB/s-49.3MiB/s (31.7MB/s-51.7MB/s), io=555MiB (582MB), run=5004-5005msec 00:28:31.365 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:31.365 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 bdev_null0 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 [2024-11-20 15:36:16.948841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 bdev_null1 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 bdev_null2 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.366 { 00:28:31.366 "params": { 00:28:31.366 "name": "Nvme$subsystem", 00:28:31.366 "trtype": "$TEST_TRANSPORT", 00:28:31.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.366 "adrfam": "ipv4", 
00:28:31.366 "trsvcid": "$NVMF_PORT", 00:28:31.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.366 "hdgst": ${hdgst:-false}, 00:28:31.366 "ddgst": ${ddgst:-false} 00:28:31.366 }, 00:28:31.366 "method": "bdev_nvme_attach_controller" 00:28:31.366 } 00:28:31.366 EOF 00:28:31.366 )") 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:31.366 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.367 { 00:28:31.367 "params": { 00:28:31.367 "name": "Nvme$subsystem", 00:28:31.367 "trtype": "$TEST_TRANSPORT", 00:28:31.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.367 "adrfam": "ipv4", 00:28:31.367 "trsvcid": "$NVMF_PORT", 00:28:31.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.367 "hdgst": ${hdgst:-false}, 00:28:31.367 "ddgst": ${ddgst:-false} 00:28:31.367 }, 00:28:31.367 "method": "bdev_nvme_attach_controller" 00:28:31.367 } 00:28:31.367 EOF 00:28:31.367 )") 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:31.367 
15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.367 { 00:28:31.367 "params": { 00:28:31.367 "name": "Nvme$subsystem", 00:28:31.367 "trtype": "$TEST_TRANSPORT", 00:28:31.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.367 "adrfam": "ipv4", 00:28:31.367 "trsvcid": "$NVMF_PORT", 00:28:31.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.367 "hdgst": ${hdgst:-false}, 00:28:31.367 "ddgst": ${ddgst:-false} 00:28:31.367 }, 00:28:31.367 "method": "bdev_nvme_attach_controller" 00:28:31.367 } 00:28:31.367 EOF 00:28:31.367 )") 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:31.367 "params": { 00:28:31.367 "name": "Nvme0", 00:28:31.367 "trtype": "tcp", 00:28:31.367 "traddr": "10.0.0.3", 00:28:31.367 "adrfam": "ipv4", 00:28:31.367 "trsvcid": "4420", 00:28:31.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:31.367 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:31.367 "hdgst": false, 00:28:31.367 "ddgst": false 00:28:31.367 }, 00:28:31.367 "method": "bdev_nvme_attach_controller" 00:28:31.367 },{ 00:28:31.367 "params": { 00:28:31.367 "name": "Nvme1", 00:28:31.367 "trtype": "tcp", 00:28:31.367 "traddr": "10.0.0.3", 00:28:31.367 "adrfam": "ipv4", 00:28:31.367 "trsvcid": "4420", 00:28:31.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:31.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:31.367 "hdgst": false, 00:28:31.367 "ddgst": false 00:28:31.367 }, 00:28:31.367 "method": "bdev_nvme_attach_controller" 00:28:31.367 },{ 00:28:31.367 "params": { 00:28:31.367 "name": "Nvme2", 00:28:31.367 "trtype": "tcp", 00:28:31.367 "traddr": "10.0.0.3", 00:28:31.367 "adrfam": "ipv4", 00:28:31.367 "trsvcid": "4420", 00:28:31.367 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:31.367 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:31.367 "hdgst": false, 00:28:31.367 "ddgst": false 00:28:31.367 }, 00:28:31.367 "method": "bdev_nvme_attach_controller" 00:28:31.367 }' 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 
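[annotation] With the merged three-controller JSON printed above and both sanitizer probes empty, the records that follow show the actual launch: fio receives /dev/fd/62 and /dev/fd/61. Those descriptors are process substitutions — target/dif.sh pipes the JSON bdev config and the generated job file straight in, with no temp files. Condensed into one line from the two-level trace (dif.sh@112 supplies the JSON via create_json_sub_conf, dif.sh@82 appends the job file); this uses the tree's own helpers and is not runnable outside the SPDK test environment:

# fd 62 carries the JSON attach-controller config, fd 61 the fio job file.
fio_bdev --ioengine=spdk_bdev \
    --spdk_json_conf <(create_json_sub_conf 0 1 2) \
    <(gen_fio_conf)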
00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:31.367 15:36:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.367 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:31.367 ... 00:28:31.367 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:31.367 ... 00:28:31.367 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:31.367 ... 00:28:31.367 fio-3.35 00:28:31.367 Starting 24 threads 00:28:43.576 00:28:43.576 filename0: (groupid=0, jobs=1): err= 0: pid=110014: Wed Nov 20 15:36:28 2024 00:28:43.577 read: IOPS=303, BW=1212KiB/s (1242kB/s)(11.9MiB/10039msec) 00:28:43.577 slat (usec): min=5, max=10034, avg=23.14, stdev=310.45 00:28:43.577 clat (msec): min=7, max=107, avg=52.55, stdev=17.39 00:28:43.577 lat (msec): min=7, max=107, avg=52.58, stdev=17.39 00:28:43.577 clat percentiles (msec): 00:28:43.577 | 1.00th=[ 10], 5.00th=[ 25], 10.00th=[ 35], 20.00th=[ 37], 00:28:43.577 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 49], 60.00th=[ 58], 00:28:43.577 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 72], 95.00th=[ 84], 00:28:43.577 | 99.00th=[ 97], 99.50th=[ 107], 99.90th=[ 108], 99.95th=[ 108], 00:28:43.577 | 99.99th=[ 108] 00:28:43.577 bw ( KiB/s): min= 800, max= 1792, per=4.24%, avg=1210.80, stdev=217.44, samples=20 00:28:43.577 iops : min= 200, max= 448, avg=302.70, stdev=54.36, samples=20 00:28:43.577 lat (msec) : 10=1.05%, 20=1.58%, 50=51.99%, 100=44.53%, 250=0.85% 00:28:43.577 cpu : usr=32.74%, sys=0.59%, ctx=869, majf=0, minf=9 00:28:43.577 IO depths : 1=1.3%, 2=2.9%, 4=10.5%, 8=72.7%, 16=12.6%, 32=0.0%, >=64=0.0% 00:28:43.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 complete : 0=0.0%, 4=90.2%, 8=5.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 issued rwts: total=3043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.577 filename0: (groupid=0, jobs=1): err= 0: pid=110015: Wed Nov 20 15:36:28 2024 00:28:43.577 read: IOPS=294, BW=1177KiB/s (1206kB/s)(11.5MiB/10008msec) 00:28:43.577 slat (usec): min=5, max=8034, avg=18.67, stdev=195.37 00:28:43.577 clat (msec): min=9, max=113, avg=54.19, stdev=14.96 00:28:43.577 lat (msec): min=9, max=113, avg=54.21, stdev=14.96 00:28:43.577 clat percentiles (msec): 00:28:43.577 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 44], 00:28:43.577 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 53], 60.00th=[ 56], 00:28:43.577 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 74], 95.00th=[ 82], 00:28:43.577 | 99.00th=[ 94], 99.50th=[ 102], 99.90th=[ 114], 99.95th=[ 114], 00:28:43.577 | 99.99th=[ 114] 00:28:43.577 bw ( KiB/s): min= 840, max= 1584, per=4.08%, avg=1164.21, stdev=159.94, samples=19 00:28:43.577 iops : min= 210, max= 396, avg=291.05, stdev=39.99, samples=19 00:28:43.577 lat (msec) : 10=0.37%, 20=0.71%, 50=42.50%, 100=55.74%, 250=0.68% 00:28:43.577 cpu : usr=40.82%, sys=1.15%, ctx=1227, majf=0, minf=9 00:28:43.577 IO depths : 1=2.8%, 2=6.1%, 4=16.5%, 8=64.7%, 16=10.0%, 32=0.0%, >=64=0.0% 00:28:43.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.4%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 issued rwts: total=2946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.577 filename0: (groupid=0, jobs=1): err= 0: pid=110016: Wed Nov 20 15:36:28 2024 00:28:43.577 read: IOPS=271, BW=1087KiB/s (1113kB/s)(10.6MiB/10003msec) 00:28:43.577 slat (usec): min=2, max=8049, avg=29.89, stdev=376.40 00:28:43.577 clat (msec): min=9, max=108, avg=58.68, stdev=14.50 00:28:43.577 lat (msec): min=9, max=108, avg=58.71, stdev=14.49 00:28:43.577 clat percentiles (msec): 00:28:43.577 | 1.00th=[ 13], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:28:43.577 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 61], 00:28:43.577 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 79], 95.00th=[ 84], 00:28:43.577 | 99.00th=[ 99], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 108], 00:28:43.577 | 99.99th=[ 109] 00:28:43.577 bw ( KiB/s): min= 720, max= 1296, per=3.75%, avg=1070.74, stdev=121.95, samples=19 00:28:43.577 iops : min= 180, max= 324, avg=267.68, stdev=30.49, samples=19 00:28:43.577 lat (msec) : 10=0.48%, 20=0.70%, 50=33.25%, 100=65.02%, 250=0.55% 00:28:43.577 cpu : usr=33.24%, sys=0.70%, ctx=905, majf=0, minf=9 00:28:43.577 IO depths : 1=2.7%, 2=6.3%, 4=17.4%, 8=63.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:28:43.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 issued rwts: total=2719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.577 filename0: (groupid=0, jobs=1): err= 0: pid=110017: Wed Nov 20 15:36:28 2024 00:28:43.577 read: IOPS=280, BW=1123KiB/s (1150kB/s)(11.0MiB/10011msec) 00:28:43.577 slat (usec): min=2, max=8061, avg=30.46, stdev=355.67 00:28:43.577 clat (msec): min=15, max=107, avg=56.80, stdev=14.28 00:28:43.577 lat (msec): min=15, max=107, avg=56.83, stdev=14.28 00:28:43.577 clat percentiles (msec): 00:28:43.577 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 48], 00:28:43.577 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 60], 00:28:43.577 | 70.00th=[ 61], 80.00th=[ 71], 90.00th=[ 75], 95.00th=[ 84], 00:28:43.577 | 99.00th=[ 96], 99.50th=[ 101], 99.90th=[ 108], 99.95th=[ 108], 00:28:43.577 | 99.99th=[ 108] 00:28:43.577 bw ( KiB/s): min= 888, max= 1360, per=3.88%, avg=1107.37, stdev=110.93, samples=19 00:28:43.577 iops : min= 222, max= 340, avg=276.84, stdev=27.73, samples=19 00:28:43.577 lat (msec) : 20=0.43%, 50=37.58%, 100=61.25%, 250=0.75% 00:28:43.577 cpu : usr=32.31%, sys=0.86%, ctx=1104, majf=0, minf=9 00:28:43.577 IO depths : 1=1.1%, 2=3.1%, 4=11.6%, 8=71.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:28:43.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 complete : 0=0.0%, 4=90.5%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 issued rwts: total=2810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.577 filename0: (groupid=0, jobs=1): err= 0: pid=110018: Wed Nov 20 15:36:28 2024 00:28:43.577 read: IOPS=287, BW=1151KiB/s (1178kB/s)(11.2MiB/10006msec) 00:28:43.577 slat (usec): min=2, max=8055, avg=24.35, stdev=299.39 00:28:43.577 clat (msec): min=9, max=121, avg=55.44, stdev=16.54 00:28:43.577 lat (msec): min=9, max=121, avg=55.46, stdev=16.54 00:28:43.577 clat percentiles (msec): 00:28:43.577 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 46], 00:28:43.577 | 
30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 60], 00:28:43.577 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 73], 95.00th=[ 85], 00:28:43.577 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 123], 99.95th=[ 123], 00:28:43.577 | 99.99th=[ 123] 00:28:43.577 bw ( KiB/s): min= 688, max= 1376, per=3.96%, avg=1128.53, stdev=171.07, samples=19 00:28:43.577 iops : min= 172, max= 344, avg=282.11, stdev=42.78, samples=19 00:28:43.577 lat (msec) : 10=0.56%, 20=0.56%, 50=42.95%, 100=53.79%, 250=2.15% 00:28:43.577 cpu : usr=32.42%, sys=0.81%, ctx=1051, majf=0, minf=9 00:28:43.577 IO depths : 1=1.5%, 2=3.8%, 4=12.6%, 8=70.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:28:43.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 issued rwts: total=2878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.577 filename0: (groupid=0, jobs=1): err= 0: pid=110019: Wed Nov 20 15:36:28 2024 00:28:43.577 read: IOPS=351, BW=1404KiB/s (1438kB/s)(13.8MiB/10033msec) 00:28:43.577 slat (usec): min=3, max=4033, avg=14.69, stdev=131.67 00:28:43.577 clat (usec): min=2287, max=96179, avg=45454.22, stdev=16828.97 00:28:43.577 lat (usec): min=2293, max=96195, avg=45468.91, stdev=16827.35 00:28:43.577 clat percentiles (usec): 00:28:43.577 | 1.00th=[ 2737], 5.00th=[15926], 10.00th=[29230], 20.00th=[33817], 00:28:43.577 | 30.00th=[36963], 40.00th=[40109], 50.00th=[44303], 60.00th=[47973], 00:28:43.577 | 70.00th=[53216], 80.00th=[58983], 90.00th=[68682], 95.00th=[72877], 00:28:43.577 | 99.00th=[86508], 99.50th=[87557], 99.90th=[95945], 99.95th=[95945], 00:28:43.577 | 99.99th=[95945] 00:28:43.577 bw ( KiB/s): min= 1024, max= 2817, per=4.92%, avg=1402.85, stdev=392.12, samples=20 00:28:43.577 iops : min= 256, max= 704, avg=350.70, stdev=97.98, samples=20 00:28:43.577 lat (msec) : 4=1.76%, 10=1.59%, 20=2.84%, 50=61.39%, 100=32.42% 00:28:43.577 cpu : usr=42.64%, sys=1.35%, ctx=1166, majf=0, minf=9 00:28:43.577 IO depths : 1=1.1%, 2=2.3%, 4=9.0%, 8=75.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:28:43.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 issued rwts: total=3522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.577 filename0: (groupid=0, jobs=1): err= 0: pid=110020: Wed Nov 20 15:36:28 2024 00:28:43.577 read: IOPS=277, BW=1109KiB/s (1136kB/s)(10.8MiB/10001msec) 00:28:43.577 slat (usec): min=3, max=8029, avg=24.51, stdev=263.56 00:28:43.577 clat (msec): min=9, max=112, avg=57.48, stdev=14.55 00:28:43.577 lat (msec): min=9, max=112, avg=57.50, stdev=14.55 00:28:43.577 clat percentiles (msec): 00:28:43.577 | 1.00th=[ 14], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 48], 00:28:43.577 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 59], 00:28:43.577 | 70.00th=[ 62], 80.00th=[ 68], 90.00th=[ 78], 95.00th=[ 86], 00:28:43.577 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 112], 99.95th=[ 112], 00:28:43.577 | 99.99th=[ 112] 00:28:43.577 bw ( KiB/s): min= 816, max= 1200, per=3.83%, avg=1094.00, stdev=88.95, samples=19 00:28:43.577 iops : min= 204, max= 300, avg=273.47, stdev=22.26, samples=19 00:28:43.577 lat (msec) : 10=0.58%, 20=0.58%, 50=29.78%, 100=67.81%, 250=1.26% 00:28:43.577 cpu : usr=42.62%, sys=1.40%, ctx=1146, majf=0, minf=9 00:28:43.577 IO depths : 1=2.4%, 2=5.7%, 
4=16.3%, 8=64.9%, 16=10.7%, 32=0.0%, >=64=0.0% 00:28:43.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 complete : 0=0.0%, 4=91.8%, 8=3.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.577 issued rwts: total=2774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.577 filename0: (groupid=0, jobs=1): err= 0: pid=110021: Wed Nov 20 15:36:28 2024 00:28:43.577 read: IOPS=343, BW=1373KiB/s (1406kB/s)(13.5MiB/10059msec) 00:28:43.577 slat (usec): min=5, max=7449, avg=16.21, stdev=190.78 00:28:43.577 clat (msec): min=2, max=119, avg=46.34, stdev=18.64 00:28:43.577 lat (msec): min=2, max=120, avg=46.36, stdev=18.64 00:28:43.577 clat percentiles (msec): 00:28:43.577 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 27], 20.00th=[ 33], 00:28:43.577 | 30.00th=[ 36], 40.00th=[ 41], 50.00th=[ 46], 60.00th=[ 50], 00:28:43.577 | 70.00th=[ 56], 80.00th=[ 61], 90.00th=[ 69], 95.00th=[ 81], 00:28:43.577 | 99.00th=[ 96], 99.50th=[ 101], 99.90th=[ 121], 99.95th=[ 121], 00:28:43.578 | 99.99th=[ 121] 00:28:43.578 bw ( KiB/s): min= 720, max= 2909, per=4.82%, avg=1375.45, stdev=414.42, samples=20 00:28:43.578 iops : min= 180, max= 727, avg=343.85, stdev=103.56, samples=20 00:28:43.578 lat (msec) : 4=1.33%, 10=2.84%, 20=2.49%, 50=55.94%, 100=36.94% 00:28:43.578 lat (msec) : 250=0.46% 00:28:43.578 cpu : usr=39.18%, sys=1.18%, ctx=1244, majf=0, minf=9 00:28:43.578 IO depths : 1=1.1%, 2=2.5%, 4=9.9%, 8=74.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:28:43.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 issued rwts: total=3454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.578 filename1: (groupid=0, jobs=1): err= 0: pid=110022: Wed Nov 20 15:36:28 2024 00:28:43.578 read: IOPS=315, BW=1262KiB/s (1292kB/s)(12.4MiB/10044msec) 00:28:43.578 slat (usec): min=5, max=8023, avg=17.35, stdev=215.02 00:28:43.578 clat (msec): min=2, max=122, avg=50.56, stdev=19.59 00:28:43.578 lat (msec): min=2, max=122, avg=50.58, stdev=19.59 00:28:43.578 clat percentiles (msec): 00:28:43.578 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 29], 20.00th=[ 36], 00:28:43.578 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 49], 60.00th=[ 57], 00:28:43.578 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 83], 00:28:43.578 | 99.00th=[ 103], 99.50th=[ 108], 99.90th=[ 124], 99.95th=[ 124], 00:28:43.578 | 99.99th=[ 124] 00:28:43.578 bw ( KiB/s): min= 776, max= 2719, per=4.42%, avg=1260.35, stdev=371.82, samples=20 00:28:43.578 iops : min= 194, max= 679, avg=315.05, stdev=92.80, samples=20 00:28:43.578 lat (msec) : 4=2.02%, 10=2.52%, 20=2.52%, 50=47.65%, 100=43.99% 00:28:43.578 lat (msec) : 250=1.29% 00:28:43.578 cpu : usr=35.24%, sys=1.12%, ctx=1005, majf=0, minf=0 00:28:43.578 IO depths : 1=1.3%, 2=3.0%, 4=10.9%, 8=72.9%, 16=11.9%, 32=0.0%, >=64=0.0% 00:28:43.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 issued rwts: total=3169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.578 filename1: (groupid=0, jobs=1): err= 0: pid=110023: Wed Nov 20 15:36:28 2024 00:28:43.578 read: IOPS=305, BW=1224KiB/s (1253kB/s)(12.0MiB/10017msec) 00:28:43.578 slat (usec): min=3, max=8021, avg=19.97, 
stdev=232.23 00:28:43.578 clat (msec): min=13, max=119, avg=52.15, stdev=15.10 00:28:43.578 lat (msec): min=13, max=119, avg=52.17, stdev=15.10 00:28:43.578 clat percentiles (msec): 00:28:43.578 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 39], 00:28:43.578 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 56], 00:28:43.578 | 70.00th=[ 58], 80.00th=[ 65], 90.00th=[ 72], 95.00th=[ 81], 00:28:43.578 | 99.00th=[ 86], 99.50th=[ 96], 99.90th=[ 121], 99.95th=[ 121], 00:28:43.578 | 99.99th=[ 121] 00:28:43.578 bw ( KiB/s): min= 896, max= 1708, per=4.28%, avg=1220.00, stdev=178.30, samples=19 00:28:43.578 iops : min= 224, max= 427, avg=305.00, stdev=44.58, samples=19 00:28:43.578 lat (msec) : 20=1.08%, 50=46.00%, 100=52.76%, 250=0.16% 00:28:43.578 cpu : usr=39.30%, sys=1.05%, ctx=1349, majf=0, minf=9 00:28:43.578 IO depths : 1=1.8%, 2=4.6%, 4=14.3%, 8=67.9%, 16=11.4%, 32=0.0%, >=64=0.0% 00:28:43.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 complete : 0=0.0%, 4=91.3%, 8=3.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 issued rwts: total=3065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.578 filename1: (groupid=0, jobs=1): err= 0: pid=110024: Wed Nov 20 15:36:28 2024 00:28:43.578 read: IOPS=289, BW=1157KiB/s (1185kB/s)(11.3MiB/10007msec) 00:28:43.578 slat (usec): min=3, max=7413, avg=18.20, stdev=173.94 00:28:43.578 clat (msec): min=7, max=120, avg=55.15, stdev=14.39 00:28:43.578 lat (msec): min=7, max=120, avg=55.17, stdev=14.39 00:28:43.578 clat percentiles (msec): 00:28:43.578 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 47], 00:28:43.578 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 57], 00:28:43.578 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 72], 95.00th=[ 82], 00:28:43.578 | 99.00th=[ 99], 99.50th=[ 101], 99.90th=[ 121], 99.95th=[ 121], 00:28:43.578 | 99.99th=[ 121] 00:28:43.578 bw ( KiB/s): min= 824, max= 1386, per=3.99%, avg=1138.21, stdev=143.94, samples=19 00:28:43.578 iops : min= 206, max= 346, avg=284.53, stdev=35.94, samples=19 00:28:43.578 lat (msec) : 10=0.55%, 20=0.62%, 50=38.76%, 100=59.65%, 250=0.41% 00:28:43.578 cpu : usr=42.17%, sys=1.09%, ctx=1187, majf=0, minf=9 00:28:43.578 IO depths : 1=2.7%, 2=6.5%, 4=17.6%, 8=63.2%, 16=10.0%, 32=0.0%, >=64=0.0% 00:28:43.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 complete : 0=0.0%, 4=92.1%, 8=2.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 issued rwts: total=2895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.578 filename1: (groupid=0, jobs=1): err= 0: pid=110025: Wed Nov 20 15:36:28 2024 00:28:43.578 read: IOPS=291, BW=1164KiB/s (1192kB/s)(11.4MiB/10010msec) 00:28:43.578 slat (usec): min=5, max=8019, avg=18.71, stdev=211.09 00:28:43.578 clat (msec): min=9, max=109, avg=54.80, stdev=16.95 00:28:43.578 lat (msec): min=9, max=109, avg=54.82, stdev=16.95 00:28:43.578 clat percentiles (msec): 00:28:43.578 | 1.00th=[ 16], 5.00th=[ 27], 10.00th=[ 36], 20.00th=[ 43], 00:28:43.578 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 52], 60.00th=[ 59], 00:28:43.578 | 70.00th=[ 61], 80.00th=[ 71], 90.00th=[ 74], 95.00th=[ 85], 00:28:43.578 | 99.00th=[ 97], 99.50th=[ 108], 99.90th=[ 110], 99.95th=[ 110], 00:28:43.578 | 99.99th=[ 110] 00:28:43.578 bw ( KiB/s): min= 944, max= 1539, per=4.07%, avg=1162.90, stdev=137.17, samples=20 00:28:43.578 iops : min= 236, max= 384, avg=290.65, 
stdev=34.20, samples=20 00:28:43.578 lat (msec) : 10=0.24%, 20=1.89%, 50=45.20%, 100=51.85%, 250=0.82% 00:28:43.578 cpu : usr=32.42%, sys=0.90%, ctx=1087, majf=0, minf=9 00:28:43.578 IO depths : 1=1.6%, 2=4.2%, 4=13.8%, 8=69.0%, 16=11.4%, 32=0.0%, >=64=0.0% 00:28:43.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 issued rwts: total=2914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.578 filename1: (groupid=0, jobs=1): err= 0: pid=110026: Wed Nov 20 15:36:28 2024 00:28:43.578 read: IOPS=332, BW=1330KiB/s (1362kB/s)(13.0MiB/10010msec) 00:28:43.578 slat (usec): min=3, max=4049, avg=17.59, stdev=151.29 00:28:43.578 clat (msec): min=7, max=110, avg=48.00, stdev=15.79 00:28:43.578 lat (msec): min=7, max=110, avg=48.02, stdev=15.79 00:28:43.578 clat percentiles (msec): 00:28:43.578 | 1.00th=[ 20], 5.00th=[ 28], 10.00th=[ 32], 20.00th=[ 35], 00:28:43.578 | 30.00th=[ 39], 40.00th=[ 42], 50.00th=[ 48], 60.00th=[ 50], 00:28:43.578 | 70.00th=[ 56], 80.00th=[ 59], 90.00th=[ 70], 95.00th=[ 75], 00:28:43.578 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 111], 00:28:43.578 | 99.99th=[ 111] 00:28:43.578 bw ( KiB/s): min= 896, max= 1760, per=4.64%, avg=1324.80, stdev=248.87, samples=20 00:28:43.578 iops : min= 224, max= 440, avg=331.20, stdev=62.22, samples=20 00:28:43.578 lat (msec) : 10=0.48%, 20=0.96%, 50=59.86%, 100=37.44%, 250=1.26% 00:28:43.578 cpu : usr=42.45%, sys=0.98%, ctx=1224, majf=0, minf=9 00:28:43.578 IO depths : 1=1.6%, 2=3.5%, 4=11.8%, 8=71.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:28:43.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.578 filename1: (groupid=0, jobs=1): err= 0: pid=110027: Wed Nov 20 15:36:28 2024 00:28:43.578 read: IOPS=278, BW=1115KiB/s (1142kB/s)(10.9MiB/10006msec) 00:28:43.578 slat (usec): min=2, max=8025, avg=19.29, stdev=186.13 00:28:43.578 clat (msec): min=9, max=119, avg=57.24, stdev=15.97 00:28:43.578 lat (msec): min=9, max=119, avg=57.26, stdev=15.97 00:28:43.578 clat percentiles (msec): 00:28:43.578 | 1.00th=[ 13], 5.00th=[ 33], 10.00th=[ 41], 20.00th=[ 47], 00:28:43.578 | 30.00th=[ 50], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 58], 00:28:43.578 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 77], 95.00th=[ 87], 00:28:43.578 | 99.00th=[ 111], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:28:43.578 | 99.99th=[ 121] 00:28:43.578 bw ( KiB/s): min= 640, max= 1280, per=3.86%, avg=1100.63, stdev=132.91, samples=19 00:28:43.578 iops : min= 160, max= 320, avg=275.16, stdev=33.23, samples=19 00:28:43.578 lat (msec) : 10=0.57%, 20=0.57%, 50=32.97%, 100=64.19%, 250=1.68% 00:28:43.578 cpu : usr=42.42%, sys=1.31%, ctx=1405, majf=0, minf=9 00:28:43.578 IO depths : 1=2.3%, 2=5.6%, 4=15.3%, 8=65.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:28:43.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 complete : 0=0.0%, 4=91.6%, 8=3.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.578 issued rwts: total=2790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.578 filename1: (groupid=0, jobs=1): err= 0: pid=110028: Wed Nov 20 
15:36:28 2024 00:28:43.578 read: IOPS=283, BW=1134KiB/s (1161kB/s)(11.1MiB/10026msec) 00:28:43.578 slat (usec): min=5, max=10023, avg=18.49, stdev=240.57 00:28:43.578 clat (msec): min=18, max=108, avg=56.32, stdev=14.83 00:28:43.578 lat (msec): min=18, max=108, avg=56.33, stdev=14.83 00:28:43.578 clat percentiles (msec): 00:28:43.578 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 47], 00:28:43.578 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 60], 00:28:43.578 | 70.00th=[ 61], 80.00th=[ 71], 90.00th=[ 74], 95.00th=[ 84], 00:28:43.578 | 99.00th=[ 101], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 109], 00:28:43.578 | 99.99th=[ 109] 00:28:43.578 bw ( KiB/s): min= 752, max= 1328, per=3.94%, avg=1123.32, stdev=138.02, samples=19 00:28:43.578 iops : min= 188, max= 332, avg=280.79, stdev=34.52, samples=19 00:28:43.578 lat (msec) : 20=0.39%, 50=41.40%, 100=57.19%, 250=1.02% 00:28:43.578 cpu : usr=35.37%, sys=0.95%, ctx=986, majf=0, minf=9 00:28:43.578 IO depths : 1=1.4%, 2=3.6%, 4=11.7%, 8=71.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:28:43.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 complete : 0=0.0%, 4=90.7%, 8=4.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 issued rwts: total=2843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.579 filename1: (groupid=0, jobs=1): err= 0: pid=110029: Wed Nov 20 15:36:28 2024 00:28:43.579 read: IOPS=275, BW=1103KiB/s (1130kB/s)(10.8MiB/10006msec) 00:28:43.579 slat (usec): min=5, max=8021, avg=23.66, stdev=284.70 00:28:43.579 clat (msec): min=24, max=108, avg=57.81, stdev=14.63 00:28:43.579 lat (msec): min=24, max=108, avg=57.83, stdev=14.64 00:28:43.579 clat percentiles (msec): 00:28:43.579 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 48], 00:28:43.579 | 30.00th=[ 50], 40.00th=[ 52], 50.00th=[ 55], 60.00th=[ 61], 00:28:43.579 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 85], 00:28:43.579 | 99.00th=[ 96], 99.50th=[ 103], 99.90th=[ 109], 99.95th=[ 109], 00:28:43.579 | 99.99th=[ 109] 00:28:43.579 bw ( KiB/s): min= 768, max= 1448, per=3.83%, avg=1092.21, stdev=160.36, samples=19 00:28:43.579 iops : min= 192, max= 362, avg=273.05, stdev=40.09, samples=19 00:28:43.579 lat (msec) : 50=33.59%, 100=65.65%, 250=0.76% 00:28:43.579 cpu : usr=40.51%, sys=1.14%, ctx=1379, majf=0, minf=9 00:28:43.579 IO depths : 1=2.0%, 2=4.4%, 4=12.4%, 8=69.2%, 16=12.0%, 32=0.0%, >=64=0.0% 00:28:43.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 complete : 0=0.0%, 4=91.0%, 8=4.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 issued rwts: total=2760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.579 filename2: (groupid=0, jobs=1): err= 0: pid=110030: Wed Nov 20 15:36:28 2024 00:28:43.579 read: IOPS=308, BW=1236KiB/s (1266kB/s)(12.1MiB/10040msec) 00:28:43.579 slat (usec): min=5, max=8043, avg=20.35, stdev=253.91 00:28:43.579 clat (msec): min=9, max=102, avg=51.56, stdev=16.03 00:28:43.579 lat (msec): min=9, max=102, avg=51.58, stdev=16.03 00:28:43.579 clat percentiles (msec): 00:28:43.579 | 1.00th=[ 12], 5.00th=[ 27], 10.00th=[ 33], 20.00th=[ 37], 00:28:43.579 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 55], 00:28:43.579 | 70.00th=[ 60], 80.00th=[ 63], 90.00th=[ 72], 95.00th=[ 83], 00:28:43.579 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 104], 99.95th=[ 104], 00:28:43.579 | 99.99th=[ 104] 00:28:43.579 bw ( KiB/s): min= 
784, max= 1792, per=4.34%, avg=1237.20, stdev=213.41, samples=20 00:28:43.579 iops : min= 196, max= 448, avg=309.30, stdev=53.35, samples=20 00:28:43.579 lat (msec) : 10=0.52%, 20=1.55%, 50=48.29%, 100=49.36%, 250=0.29% 00:28:43.579 cpu : usr=37.86%, sys=1.06%, ctx=1161, majf=0, minf=9 00:28:43.579 IO depths : 1=1.9%, 2=4.4%, 4=13.4%, 8=68.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:28:43.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 issued rwts: total=3102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.579 filename2: (groupid=0, jobs=1): err= 0: pid=110031: Wed Nov 20 15:36:28 2024 00:28:43.579 read: IOPS=276, BW=1106KiB/s (1132kB/s)(10.8MiB/10005msec) 00:28:43.579 slat (usec): min=2, max=8062, avg=22.24, stdev=264.80 00:28:43.579 clat (msec): min=5, max=121, avg=57.72, stdev=16.14 00:28:43.579 lat (msec): min=5, max=121, avg=57.74, stdev=16.14 00:28:43.579 clat percentiles (msec): 00:28:43.579 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 48], 00:28:43.579 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 61], 00:28:43.579 | 70.00th=[ 62], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 85], 00:28:43.579 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 122], 99.95th=[ 122], 00:28:43.579 | 99.99th=[ 122] 00:28:43.579 bw ( KiB/s): min= 728, max= 1304, per=3.79%, avg=1081.26, stdev=115.26, samples=19 00:28:43.579 iops : min= 182, max= 326, avg=270.32, stdev=28.82, samples=19 00:28:43.579 lat (msec) : 10=0.65%, 20=0.94%, 50=38.72%, 100=58.60%, 250=1.08% 00:28:43.579 cpu : usr=32.55%, sys=0.71%, ctx=873, majf=0, minf=9 00:28:43.579 IO depths : 1=1.0%, 2=3.0%, 4=11.3%, 8=71.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:28:43.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 complete : 0=0.0%, 4=90.7%, 8=5.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 issued rwts: total=2766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.579 filename2: (groupid=0, jobs=1): err= 0: pid=110032: Wed Nov 20 15:36:28 2024 00:28:43.579 read: IOPS=281, BW=1128KiB/s (1155kB/s)(11.0MiB/10008msec) 00:28:43.579 slat (usec): min=4, max=8061, avg=20.01, stdev=227.36 00:28:43.579 clat (msec): min=10, max=116, avg=56.56, stdev=14.16 00:28:43.579 lat (msec): min=10, max=116, avg=56.58, stdev=14.16 00:28:43.579 clat percentiles (msec): 00:28:43.579 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:28:43.579 | 30.00th=[ 49], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 57], 00:28:43.579 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 77], 95.00th=[ 84], 00:28:43.579 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 117], 99.95th=[ 117], 00:28:43.579 | 99.99th=[ 117] 00:28:43.579 bw ( KiB/s): min= 896, max= 1280, per=3.91%, avg=1114.11, stdev=113.29, samples=19 00:28:43.579 iops : min= 224, max= 320, avg=278.53, stdev=28.32, samples=19 00:28:43.579 lat (msec) : 20=0.64%, 50=36.89%, 100=61.91%, 250=0.57% 00:28:43.579 cpu : usr=38.67%, sys=1.26%, ctx=1145, majf=0, minf=9 00:28:43.579 IO depths : 1=2.2%, 2=5.4%, 4=15.6%, 8=66.0%, 16=10.9%, 32=0.0%, >=64=0.0% 00:28:43.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 complete : 0=0.0%, 4=91.5%, 8=3.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 issued rwts: total=2822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.579 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:28:43.579 filename2: (groupid=0, jobs=1): err= 0: pid=110033: Wed Nov 20 15:36:28 2024 00:28:43.579 read: IOPS=333, BW=1336KiB/s (1368kB/s)(13.1MiB/10011msec) 00:28:43.579 slat (usec): min=3, max=5033, avg=14.07, stdev=111.82 00:28:43.579 clat (msec): min=11, max=120, avg=47.81, stdev=15.60 00:28:43.579 lat (msec): min=11, max=120, avg=47.82, stdev=15.60 00:28:43.579 clat percentiles (msec): 00:28:43.579 | 1.00th=[ 13], 5.00th=[ 25], 10.00th=[ 31], 20.00th=[ 36], 00:28:43.579 | 30.00th=[ 40], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 50], 00:28:43.579 | 70.00th=[ 55], 80.00th=[ 61], 90.00th=[ 69], 95.00th=[ 72], 00:28:43.579 | 99.00th=[ 95], 99.50th=[ 104], 99.90th=[ 121], 99.95th=[ 121], 00:28:43.579 | 99.99th=[ 121] 00:28:43.579 bw ( KiB/s): min= 992, max= 1904, per=4.68%, avg=1336.26, stdev=197.40, samples=19 00:28:43.579 iops : min= 248, max= 476, avg=334.05, stdev=49.35, samples=19 00:28:43.579 lat (msec) : 20=2.81%, 50=58.42%, 100=38.26%, 250=0.51% 00:28:43.579 cpu : usr=41.39%, sys=1.13%, ctx=1170, majf=0, minf=9 00:28:43.579 IO depths : 1=1.3%, 2=3.1%, 4=11.0%, 8=72.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:28:43.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 issued rwts: total=3343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.579 filename2: (groupid=0, jobs=1): err= 0: pid=110034: Wed Nov 20 15:36:28 2024 00:28:43.579 read: IOPS=283, BW=1133KiB/s (1161kB/s)(11.1MiB/10009msec) 00:28:43.579 slat (usec): min=3, max=8066, avg=18.17, stdev=181.81 00:28:43.579 clat (msec): min=9, max=107, avg=56.30, stdev=14.34 00:28:43.579 lat (msec): min=9, max=107, avg=56.31, stdev=14.34 00:28:43.579 clat percentiles (msec): 00:28:43.579 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 48], 00:28:43.579 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 59], 00:28:43.579 | 70.00th=[ 62], 80.00th=[ 66], 90.00th=[ 75], 95.00th=[ 84], 00:28:43.579 | 99.00th=[ 95], 99.50th=[ 99], 99.90th=[ 108], 99.95th=[ 108], 00:28:43.579 | 99.99th=[ 108] 00:28:43.579 bw ( KiB/s): min= 896, max= 1376, per=3.93%, avg=1120.00, stdev=124.54, samples=19 00:28:43.579 iops : min= 224, max= 344, avg=280.00, stdev=31.13, samples=19 00:28:43.579 lat (msec) : 10=0.56%, 20=0.56%, 50=30.47%, 100=68.12%, 250=0.28% 00:28:43.579 cpu : usr=41.15%, sys=1.42%, ctx=1307, majf=0, minf=9 00:28:43.579 IO depths : 1=2.8%, 2=6.5%, 4=16.8%, 8=63.6%, 16=10.3%, 32=0.0%, >=64=0.0% 00:28:43.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 complete : 0=0.0%, 4=92.0%, 8=2.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 issued rwts: total=2836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.579 filename2: (groupid=0, jobs=1): err= 0: pid=110035: Wed Nov 20 15:36:28 2024 00:28:43.579 read: IOPS=287, BW=1150KiB/s (1178kB/s)(11.2MiB/10007msec) 00:28:43.579 slat (usec): min=3, max=8038, avg=22.85, stdev=299.08 00:28:43.579 clat (msec): min=8, max=120, avg=55.49, stdev=14.98 00:28:43.579 lat (msec): min=8, max=120, avg=55.51, stdev=14.98 00:28:43.579 clat percentiles (msec): 00:28:43.579 | 1.00th=[ 25], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 46], 00:28:43.579 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 57], 60.00th=[ 61], 00:28:43.579 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 73], 95.00th=[ 85], 00:28:43.579 | 
99.00th=[ 96], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:28:43.579 | 99.99th=[ 121] 00:28:43.579 bw ( KiB/s): min= 768, max= 1328, per=3.94%, avg=1125.89, stdev=136.36, samples=19 00:28:43.579 iops : min= 192, max= 332, avg=281.47, stdev=34.09, samples=19 00:28:43.579 lat (msec) : 10=0.07%, 50=45.97%, 100=52.99%, 250=0.97% 00:28:43.579 cpu : usr=32.53%, sys=0.62%, ctx=871, majf=0, minf=9 00:28:43.579 IO depths : 1=1.6%, 2=3.6%, 4=12.5%, 8=70.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:28:43.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.579 issued rwts: total=2878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.579 filename2: (groupid=0, jobs=1): err= 0: pid=110036: Wed Nov 20 15:36:28 2024 00:28:43.579 read: IOPS=329, BW=1319KiB/s (1351kB/s)(12.9MiB/10011msec) 00:28:43.579 slat (usec): min=5, max=7032, avg=18.08, stdev=172.59 00:28:43.579 clat (msec): min=9, max=128, avg=48.37, stdev=15.82 00:28:43.579 lat (msec): min=9, max=128, avg=48.38, stdev=15.82 00:28:43.579 clat percentiles (msec): 00:28:43.579 | 1.00th=[ 13], 5.00th=[ 27], 10.00th=[ 32], 20.00th=[ 36], 00:28:43.579 | 30.00th=[ 40], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 51], 00:28:43.580 | 70.00th=[ 56], 80.00th=[ 60], 90.00th=[ 68], 95.00th=[ 73], 00:28:43.580 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 129], 99.95th=[ 129], 00:28:43.580 | 99.99th=[ 129] 00:28:43.580 bw ( KiB/s): min= 920, max= 1840, per=4.61%, avg=1314.40, stdev=195.44, samples=20 00:28:43.580 iops : min= 230, max= 460, avg=328.60, stdev=48.86, samples=20 00:28:43.580 lat (msec) : 10=0.48%, 20=2.91%, 50=54.72%, 100=41.40%, 250=0.48% 00:28:43.580 cpu : usr=43.06%, sys=1.21%, ctx=1328, majf=0, minf=9 00:28:43.580 IO depths : 1=1.8%, 2=3.7%, 4=12.2%, 8=70.7%, 16=11.6%, 32=0.0%, >=64=0.0% 00:28:43.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.580 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.580 issued rwts: total=3302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.580 filename2: (groupid=0, jobs=1): err= 0: pid=110037: Wed Nov 20 15:36:28 2024 00:28:43.580 read: IOPS=275, BW=1102KiB/s (1129kB/s)(10.8MiB/10005msec) 00:28:43.580 slat (usec): min=2, max=8047, avg=21.01, stdev=238.19 00:28:43.580 clat (msec): min=5, max=119, avg=57.92, stdev=15.35 00:28:43.580 lat (msec): min=5, max=119, avg=57.94, stdev=15.35 00:28:43.580 clat percentiles (msec): 00:28:43.580 | 1.00th=[ 13], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 48], 00:28:43.580 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 61], 00:28:43.580 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 78], 95.00th=[ 85], 00:28:43.580 | 99.00th=[ 97], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:28:43.580 | 99.99th=[ 121] 00:28:43.580 bw ( KiB/s): min= 712, max= 1208, per=3.76%, avg=1071.58, stdev=120.72, samples=19 00:28:43.580 iops : min= 178, max= 302, avg=267.89, stdev=30.18, samples=19 00:28:43.580 lat (msec) : 10=0.25%, 20=1.16%, 50=28.44%, 100=69.50%, 250=0.65% 00:28:43.580 cpu : usr=36.13%, sys=0.93%, ctx=1105, majf=0, minf=10 00:28:43.580 IO depths : 1=1.9%, 2=5.3%, 4=16.0%, 8=65.6%, 16=11.2%, 32=0.0%, >=64=0.0% 00:28:43.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.580 complete : 0=0.0%, 4=91.7%, 8=3.2%, 16=5.2%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:43.580 issued rwts: total=2757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.580 00:28:43.580 Run status group 0 (all jobs): 00:28:43.580 READ: bw=27.8MiB/s (29.2MB/s), 1087KiB/s-1404KiB/s (1113kB/s-1438kB/s), io=280MiB (294MB), run=10001-10059msec 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 bdev_null0 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 [2024-11-20 15:36:28.548438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:43.580 15:36:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 bdev_null1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.580 15:36:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.580 { 00:28:43.580 "params": { 00:28:43.580 "name": "Nvme$subsystem", 00:28:43.580 "trtype": "$TEST_TRANSPORT", 00:28:43.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.580 "adrfam": "ipv4", 00:28:43.580 "trsvcid": "$NVMF_PORT", 00:28:43.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.581 "hdgst": ${hdgst:-false}, 00:28:43.581 "ddgst": ${ddgst:-false} 00:28:43.581 }, 00:28:43.581 "method": "bdev_nvme_attach_controller" 00:28:43.581 } 00:28:43.581 EOF 00:28:43.581 )") 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.581 15:36:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.581 { 00:28:43.581 "params": { 00:28:43.581 "name": "Nvme$subsystem", 00:28:43.581 "trtype": "$TEST_TRANSPORT", 00:28:43.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.581 "adrfam": "ipv4", 00:28:43.581 "trsvcid": "$NVMF_PORT", 00:28:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.581 "hdgst": ${hdgst:-false}, 00:28:43.581 "ddgst": ${ddgst:-false} 00:28:43.581 }, 00:28:43.581 "method": "bdev_nvme_attach_controller" 00:28:43.581 } 00:28:43.581 EOF 00:28:43.581 )") 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
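A note on the config plumbing traced at this point: gen_nvmf_target_json emits one bdev_nvme_attach_controller parameter block per subsystem, joins the blocks with a comma via IFS, and pretty-prints the result through jq before fio reads it from /dev/fd/62. A minimal standalone sketch of that pattern follows, assuming bash with jq on PATH; the loop and variable names are illustrative, not lifted from nvmf/common.sh:

config=()
for sub in 0 1; do
  # One attach-controller fragment per subsystem, matching the shape the
  # IFS=, / printf / jq traces around here produce.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with a comma and pretty-print, as the trace shows.
(IFS=,; printf '%s\n' "${config[*]}") | jq .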
00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:43.581 "params": { 00:28:43.581 "name": "Nvme0", 00:28:43.581 "trtype": "tcp", 00:28:43.581 "traddr": "10.0.0.3", 00:28:43.581 "adrfam": "ipv4", 00:28:43.581 "trsvcid": "4420", 00:28:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:43.581 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:43.581 "hdgst": false, 00:28:43.581 "ddgst": false 00:28:43.581 }, 00:28:43.581 "method": "bdev_nvme_attach_controller" 00:28:43.581 },{ 00:28:43.581 "params": { 00:28:43.581 "name": "Nvme1", 00:28:43.581 "trtype": "tcp", 00:28:43.581 "traddr": "10.0.0.3", 00:28:43.581 "adrfam": "ipv4", 00:28:43.581 "trsvcid": "4420", 00:28:43.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:43.581 "hdgst": false, 00:28:43.581 "ddgst": false 00:28:43.581 }, 00:28:43.581 "method": "bdev_nvme_attach_controller" 00:28:43.581 }' 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:43.581 15:36:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.581 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:43.581 ... 00:28:43.581 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:43.581 ... 
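The empty asan_lib assignments just above are the sanitizer probe in the fio wrapper: for each supported sanitizer it ldd's the SPDK fio plugin, keeps the runtime path if one is linked in, and preloads that runtime ahead of the plugin so the interposers load in the right order. Condensed, the logic traced here amounts to this sketch:

# On this run both probes return nothing, so LD_PRELOAD ends up holding
# just the plugin (hence the leading space in the traced value).
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n "$asan_lib" ]] && break
done
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61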
00:28:43.581 fio-3.35 00:28:43.581 Starting 4 threads 00:28:48.850 00:28:48.850 filename0: (groupid=0, jobs=1): err= 0: pid=110169: Wed Nov 20 15:36:34 2024 00:28:48.850 read: IOPS=2561, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:28:48.850 slat (usec): min=2, max=395, avg= 9.19, stdev= 6.52 00:28:48.850 clat (usec): min=378, max=4773, avg=3078.25, stdev=219.11 00:28:48.850 lat (usec): min=384, max=4790, avg=3087.44, stdev=219.73 00:28:48.850 clat percentiles (usec): 00:28:48.850 | 1.00th=[ 2474], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3032], 00:28:48.850 | 30.00th=[ 3064], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3097], 00:28:48.850 | 70.00th=[ 3130], 80.00th=[ 3163], 90.00th=[ 3195], 95.00th=[ 3228], 00:28:48.850 | 99.00th=[ 3359], 99.50th=[ 3425], 99.90th=[ 3884], 99.95th=[ 4228], 00:28:48.850 | 99.99th=[ 4752] 00:28:48.850 bw ( KiB/s): min=20224, max=21248, per=25.14%, avg=20522.67, stdev=306.93, samples=9 00:28:48.850 iops : min= 2528, max= 2656, avg=2565.33, stdev=38.37, samples=9 00:28:48.850 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.68% 00:28:48.850 lat (msec) : 2=0.06%, 4=99.14%, 10=0.09% 00:28:48.850 cpu : usr=94.44%, sys=4.62%, ctx=10, majf=0, minf=9 00:28:48.850 IO depths : 1=9.8%, 2=24.1%, 4=50.9%, 8=15.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:48.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.850 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.850 issued rwts: total=12808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.850 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:48.850 filename0: (groupid=0, jobs=1): err= 0: pid=110170: Wed Nov 20 15:36:34 2024 00:28:48.850 read: IOPS=2547, BW=19.9MiB/s (20.9MB/s)(99.6MiB/5002msec) 00:28:48.850 slat (nsec): min=5565, max=86144, avg=16225.24, stdev=8735.12 00:28:48.850 clat (usec): min=1860, max=4655, avg=3053.13, stdev=104.40 00:28:48.850 lat (usec): min=1897, max=4693, avg=3069.35, stdev=106.60 00:28:48.850 clat percentiles (usec): 00:28:48.850 | 1.00th=[ 2868], 5.00th=[ 2933], 10.00th=[ 2966], 20.00th=[ 2999], 00:28:48.850 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3032], 60.00th=[ 3064], 00:28:48.850 | 70.00th=[ 3097], 80.00th=[ 3097], 90.00th=[ 3163], 95.00th=[ 3195], 00:28:48.850 | 99.00th=[ 3326], 99.50th=[ 3359], 99.90th=[ 4146], 99.95th=[ 4490], 00:28:48.850 | 99.99th=[ 4621] 00:28:48.850 bw ( KiB/s): min=20096, max=20608, per=24.96%, avg=20380.44, stdev=178.49, samples=9 00:28:48.850 iops : min= 2512, max= 2576, avg=2547.56, stdev=22.31, samples=9 00:28:48.850 lat (msec) : 2=0.01%, 4=99.86%, 10=0.13% 00:28:48.850 cpu : usr=94.18%, sys=4.50%, ctx=8, majf=0, minf=9 00:28:48.850 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:48.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.850 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.850 issued rwts: total=12744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.850 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:48.850 filename1: (groupid=0, jobs=1): err= 0: pid=110171: Wed Nov 20 15:36:34 2024 00:28:48.850 read: IOPS=2549, BW=19.9MiB/s (20.9MB/s)(99.6MiB/5002msec) 00:28:48.850 slat (nsec): min=5597, max=67460, avg=9850.08, stdev=5224.26 00:28:48.850 clat (usec): min=1820, max=4814, avg=3090.56, stdev=118.89 00:28:48.850 lat (usec): min=1827, max=4844, avg=3100.41, stdev=119.46 00:28:48.850 clat percentiles (usec): 00:28:48.850 | 1.00th=[ 2835], 5.00th=[ 2999], 10.00th=[ 2999], 
20.00th=[ 3032], 00:28:48.850 | 30.00th=[ 3064], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3097], 00:28:48.850 | 70.00th=[ 3130], 80.00th=[ 3163], 90.00th=[ 3195], 95.00th=[ 3228], 00:28:48.850 | 99.00th=[ 3359], 99.50th=[ 3687], 99.90th=[ 3818], 99.95th=[ 4293], 00:28:48.850 | 99.99th=[ 4293] 00:28:48.850 bw ( KiB/s): min=20136, max=20608, per=24.97%, avg=20384.89, stdev=170.85, samples=9 00:28:48.850 iops : min= 2517, max= 2576, avg=2548.11, stdev=21.36, samples=9 00:28:48.850 lat (msec) : 2=0.06%, 4=99.87%, 10=0.07% 00:28:48.850 cpu : usr=94.54%, sys=4.52%, ctx=9, majf=0, minf=9 00:28:48.850 IO depths : 1=10.5%, 2=24.5%, 4=50.5%, 8=14.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:48.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.850 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.850 issued rwts: total=12752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.850 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:48.850 filename1: (groupid=0, jobs=1): err= 0: pid=110172: Wed Nov 20 15:36:34 2024 00:28:48.850 read: IOPS=2547, BW=19.9MiB/s (20.9MB/s)(99.6MiB/5002msec) 00:28:48.850 slat (nsec): min=5692, max=86900, avg=15531.92, stdev=8193.22 00:28:48.850 clat (usec): min=1700, max=6162, avg=3061.66, stdev=140.05 00:28:48.850 lat (usec): min=1706, max=6194, avg=3077.19, stdev=141.34 00:28:48.850 clat percentiles (usec): 00:28:48.850 | 1.00th=[ 2835], 5.00th=[ 2933], 10.00th=[ 2966], 20.00th=[ 2999], 00:28:48.850 | 30.00th=[ 3032], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3064], 00:28:48.850 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3163], 95.00th=[ 3195], 00:28:48.850 | 99.00th=[ 3359], 99.50th=[ 3556], 99.90th=[ 4621], 99.95th=[ 5735], 00:28:48.850 | 99.99th=[ 5800] 00:28:48.850 bw ( KiB/s): min=20008, max=20608, per=24.95%, avg=20370.67, stdev=194.98, samples=9 00:28:48.850 iops : min= 2501, max= 2576, avg=2546.33, stdev=24.37, samples=9 00:28:48.850 lat (msec) : 2=0.10%, 4=99.67%, 10=0.23% 00:28:48.850 cpu : usr=94.12%, sys=4.54%, ctx=8, majf=0, minf=10 00:28:48.850 IO depths : 1=9.4%, 2=25.0%, 4=50.0%, 8=15.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:48.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.850 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.850 issued rwts: total=12744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.850 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:48.850 00:28:48.850 Run status group 0 (all jobs): 00:28:48.850 READ: bw=79.7MiB/s (83.6MB/s), 19.9MiB/s-20.0MiB/s (20.9MB/s-21.0MB/s), io=399MiB (418MB), run=5001-5002msec 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
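Teardown, which the entries above and below walk through, mirrors setup: for each index, destroy_subsystem removes the NVMe-oF subsystem first and only then deletes the null bdev backing its namespace. As direct JSON-RPC calls, the whole destroy_subsystems 0 1 step reduces to roughly the following; the rpc.py invocation is an assumption here, since the log's rpc_cmd is a wrapper over it:

# Order matters in this sketch: drop the subsystem before its backing bdev.
for sub_id in 0 1; do
  scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}"
  scripts/rpc.py bdev_null_delete "bdev_null${sub_id}"
done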
00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:48.850 ************************************ 00:28:48.850 END TEST fio_dif_rand_params 00:28:48.850 ************************************ 00:28:48.850 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.850 00:28:48.850 real 0m24.064s 00:28:48.850 user 2m6.488s 00:28:48.850 sys 0m5.498s 00:28:48.851 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:48.851 15:36:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:48.851 15:36:34 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:48.851 15:36:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:48.851 15:36:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:48.851 15:36:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:48.851 ************************************ 00:28:48.851 START TEST fio_dif_digest 00:28:48.851 ************************************ 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:48.851 15:36:34 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:48.851 bdev_null0 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:48.851 [2024-11-20 15:36:34.919625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:48.851 { 00:28:48.851 "params": { 00:28:48.851 "name": "Nvme$subsystem", 00:28:48.851 "trtype": "$TEST_TRANSPORT", 00:28:48.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.851 "adrfam": "ipv4", 00:28:48.851 "trsvcid": 
"$NVMF_PORT", 00:28:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.851 "hdgst": ${hdgst:-false}, 00:28:48.851 "ddgst": ${ddgst:-false} 00:28:48.851 }, 00:28:48.851 "method": "bdev_nvme_attach_controller" 00:28:48.851 } 00:28:48.851 EOF 00:28:48.851 )") 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:48.851 "params": { 00:28:48.851 "name": "Nvme0", 00:28:48.851 "trtype": "tcp", 00:28:48.851 "traddr": "10.0.0.3", 00:28:48.851 "adrfam": "ipv4", 00:28:48.851 "trsvcid": "4420", 00:28:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:48.851 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:48.851 "hdgst": true, 00:28:48.851 "ddgst": true 00:28:48.851 }, 00:28:48.851 "method": "bdev_nvme_attach_controller" 00:28:48.851 }' 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:48.851 15:36:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:48.851 15:36:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:48.851 15:36:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:48.851 15:36:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:48.851 15:36:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:48.851 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:48.851 ... 
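The job listing above comes from fio driving SPDK's bdev ioengine: the fio_plugin helper LD_PRELOADs the plugin, after first preloading any sanitizer runtime that ldd finds linked into it (both greps came up empty in this run), and feeds the JSON target config plus the generated job file over anonymous descriptors. A condensed sketch of that pattern; $json_conf and $fio_job are hypothetical stand-ins for the /dev/fd/62 and /dev/fd/61 process substitutions used above:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# asan_lib resolved to empty above, so only the plugin itself gets preloaded
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf "$json_conf" "$fio_job"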
00:28:48.851 fio-3.35 00:28:48.851 Starting 3 threads 00:29:01.067 00:29:01.067 filename0: (groupid=0, jobs=1): err= 0: pid=110279: Wed Nov 20 15:36:45 2024 00:29:01.067 read: IOPS=244, BW=30.5MiB/s (32.0MB/s)(306MiB/10004msec) 00:29:01.067 slat (nsec): min=6114, max=59893, avg=26164.29, stdev=7939.31 00:29:01.067 clat (usec): min=6012, max=15520, avg=12245.43, stdev=2164.11 00:29:01.067 lat (usec): min=6023, max=15548, avg=12271.59, stdev=2165.69 00:29:01.067 clat percentiles (usec): 00:29:01.067 | 1.00th=[ 7504], 5.00th=[ 7898], 10.00th=[ 8291], 20.00th=[ 9634], 00:29:01.067 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:29:01.067 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14484], 00:29:01.067 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15401], 99.95th=[15533], 00:29:01.067 | 99.99th=[15533] 00:29:01.067 bw ( KiB/s): min=28416, max=37376, per=30.37%, avg=31356.42, stdev=2251.43, samples=19 00:29:01.067 iops : min= 222, max= 292, avg=244.95, stdev=17.61, samples=19 00:29:01.067 lat (msec) : 10=20.90%, 20=79.10% 00:29:01.067 cpu : usr=94.35%, sys=4.62%, ctx=20, majf=0, minf=0 00:29:01.067 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:01.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.068 issued rwts: total=2445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:01.068 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:01.068 filename0: (groupid=0, jobs=1): err= 0: pid=110280: Wed Nov 20 15:36:45 2024 00:29:01.068 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(337MiB/10007msec) 00:29:01.068 slat (nsec): min=5684, max=52431, avg=15027.98, stdev=6480.79 00:29:01.068 clat (usec): min=4420, max=92144, avg=11125.68, stdev=8059.98 00:29:01.068 lat (usec): min=4441, max=92165, avg=11140.70, stdev=8059.99 00:29:01.068 clat percentiles (usec): 00:29:01.068 | 1.00th=[ 7635], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 8979], 00:29:01.068 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:29:01.068 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10552], 95.00th=[11076], 00:29:01.068 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51643], 99.95th=[90702], 00:29:01.068 | 99.99th=[91751] 00:29:01.068 bw ( KiB/s): min=27136, max=40704, per=33.28%, avg=34357.89, stdev=4361.68, samples=19 00:29:01.068 iops : min= 212, max= 318, avg=268.42, stdev=34.08, samples=19 00:29:01.068 lat (msec) : 10=71.42%, 20=24.76%, 50=1.41%, 100=2.41% 00:29:01.068 cpu : usr=92.91%, sys=5.84%, ctx=9, majf=0, minf=0 00:29:01.068 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:01.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.069 issued rwts: total=2694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:01.069 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:01.069 filename0: (groupid=0, jobs=1): err= 0: pid=110281: Wed Nov 20 15:36:45 2024 00:29:01.069 read: IOPS=293, BW=36.6MiB/s (38.4MB/s)(367MiB/10004msec) 00:29:01.069 slat (nsec): min=5595, max=38616, avg=14806.46, stdev=6722.75 00:29:01.069 clat (usec): min=4142, max=52512, avg=10216.98, stdev=2336.72 00:29:01.069 lat (usec): min=4148, max=52524, avg=10231.79, stdev=2335.72 00:29:01.069 clat percentiles (usec): 00:29:01.069 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6718], 20.00th=[ 7963], 00:29:01.069 | 
30.00th=[10028], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:29:01.069 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:29:01.069 | 99.00th=[13042], 99.50th=[13173], 99.90th=[52167], 99.95th=[52691], 00:29:01.069 | 99.99th=[52691] 00:29:01.069 bw ( KiB/s): min=34304, max=41216, per=36.43%, avg=37605.05, stdev=2188.75, samples=19 00:29:01.069 iops : min= 268, max= 322, avg=293.79, stdev=17.10, samples=19 00:29:01.069 lat (msec) : 10=29.43%, 20=70.46%, 100=0.10% 00:29:01.069 cpu : usr=92.54%, sys=6.07%, ctx=8, majf=0, minf=0 00:29:01.069 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:01.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.069 issued rwts: total=2932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:01.069 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:01.069 00:29:01.069 Run status group 0 (all jobs): 00:29:01.069 READ: bw=101MiB/s (106MB/s), 30.5MiB/s-36.6MiB/s (32.0MB/s-38.4MB/s), io=1009MiB (1058MB), run=10004-10007msec 00:29:01.069 15:36:46 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:01.069 15:36:46 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:01.069 15:36:46 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:01.069 15:36:46 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:01.069 15:36:46 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:01.069 15:36:46 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:01.069 15:36:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.069 15:36:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:01.069 15:36:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.069 15:36:46 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:01.069 15:36:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.069 15:36:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:01.069 ************************************ 00:29:01.070 END TEST fio_dif_digest 00:29:01.070 ************************************ 00:29:01.070 15:36:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.070 00:29:01.070 real 0m11.179s 00:29:01.070 user 0m28.764s 00:29:01.070 sys 0m2.024s 00:29:01.070 15:36:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.070 15:36:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:01.070 15:36:46 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:01.070 15:36:46 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:01.070 15:36:46 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:01.070 15:36:46 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:29:01.070 15:36:46 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.070 15:36:46 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:29:01.070 15:36:46 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.070 15:36:46 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.070 rmmod nvme_tcp 00:29:01.070 rmmod nvme_fabrics 00:29:01.070 rmmod nvme_keyring 00:29:01.070 15:36:46 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:29:01.070 15:36:46 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:29:01.070 15:36:46 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:29:01.070 15:36:46 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 109505 ']' 00:29:01.070 15:36:46 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 109505 00:29:01.070 15:36:46 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 109505 ']' 00:29:01.070 15:36:46 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 109505 00:29:01.071 15:36:46 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:29:01.071 15:36:46 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.071 15:36:46 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109505 00:29:01.071 killing process with pid 109505 00:29:01.071 15:36:46 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:01.071 15:36:46 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:01.071 15:36:46 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109505' 00:29:01.071 15:36:46 nvmf_dif -- common/autotest_common.sh@973 -- # kill 109505 00:29:01.071 15:36:46 nvmf_dif -- common/autotest_common.sh@978 -- # wait 109505 00:29:01.071 15:36:46 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:29:01.071 15:36:46 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:01.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:01.071 Waiting for block devices as requested 00:29:01.071 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:01.071 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:01.071 15:36:47 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:01.337 15:36:47 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:01.337 15:36:47 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:01.337 15:36:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.337 15:36:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:01.337 15:36:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.337 15:36:47 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:29:01.337 00:29:01.337 real 1m2.121s 00:29:01.337 user 3m51.614s 00:29:01.337 sys 0m18.491s 00:29:01.337 15:36:47 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.337 ************************************ 00:29:01.337 15:36:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:01.337 END TEST nvmf_dif 00:29:01.337 ************************************ 00:29:01.337 15:36:47 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:01.337 15:36:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:01.337 15:36:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.337 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:29:01.337 ************************************ 00:29:01.337 START TEST nvmf_abort_qd_sizes 00:29:01.337 ************************************ 00:29:01.337 15:36:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:01.597 * Looking for test storage... 00:29:01.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:01.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.597 --rc genhtml_branch_coverage=1 00:29:01.597 --rc genhtml_function_coverage=1 00:29:01.597 --rc genhtml_legend=1 00:29:01.597 --rc geninfo_all_blocks=1 00:29:01.597 --rc geninfo_unexecuted_blocks=1 00:29:01.597 00:29:01.597 ' 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:01.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.597 --rc genhtml_branch_coverage=1 00:29:01.597 --rc genhtml_function_coverage=1 00:29:01.597 --rc genhtml_legend=1 00:29:01.597 --rc geninfo_all_blocks=1 00:29:01.597 --rc geninfo_unexecuted_blocks=1 00:29:01.597 00:29:01.597 ' 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:01.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.597 --rc genhtml_branch_coverage=1 00:29:01.597 --rc genhtml_function_coverage=1 00:29:01.597 --rc genhtml_legend=1 00:29:01.597 --rc geninfo_all_blocks=1 00:29:01.597 --rc geninfo_unexecuted_blocks=1 00:29:01.597 00:29:01.597 ' 00:29:01.597 15:36:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:01.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.598 --rc genhtml_branch_coverage=1 00:29:01.598 --rc genhtml_function_coverage=1 00:29:01.598 --rc genhtml_legend=1 00:29:01.598 --rc geninfo_all_blocks=1 00:29:01.598 --rc geninfo_unexecuted_blocks=1 00:29:01.598 00:29:01.598 ' 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:01.598 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:01.598 15:36:47 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:01.598 Cannot find device "nvmf_init_br" 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:01.598 Cannot find device "nvmf_init_br2" 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:01.598 Cannot find device "nvmf_tgt_br" 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:29:01.598 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:01.858 Cannot find device "nvmf_tgt_br2" 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:01.858 Cannot find device "nvmf_init_br" 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:01.858 Cannot find device "nvmf_init_br2" 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:01.858 Cannot find device "nvmf_tgt_br" 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:01.858 Cannot find device "nvmf_tgt_br2" 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:01.858 Cannot find device "nvmf_br" 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:01.858 Cannot find device "nvmf_init_if" 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:01.858 Cannot find device "nvmf_init_if2" 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:01.858 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
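Every deletion above fails harmlessly (each is followed by a true) because nothing survives from a previous run; nvmf_veth_init then rebuilds the fixed test topology from scratch in the steps that follow. For orientation, the finished layout this run converges on, with all interface names and addresses as they appear in the log:

#   default netns (initiator side)           netns nvmf_tgt_ns_spdk (target side)
#   nvmf_init_if   10.0.0.1/24 ─┐         ┌─ nvmf_tgt_if    10.0.0.3/24
#   nvmf_init_if2  10.0.0.2/24 ─┤ nvmf_br ├─ nvmf_tgt_if2   10.0.0.4/24
#   veth peers nvmf_init_br/br2 ┘         └─ veth peers nvmf_tgt_br/br2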
00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:01.858 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:01.858 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:01.859 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:01.859 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:02.118 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:02.118 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.129 ms 00:29:02.118 00:29:02.118 --- 10.0.0.3 ping statistics --- 00:29:02.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.118 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:02.118 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:02.118 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.137 ms 00:29:02.118 00:29:02.118 --- 10.0.0.4 ping statistics --- 00:29:02.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.118 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:29:02.118 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:02.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:02.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:29:02.118 00:29:02.118 --- 10.0.0.1 ping statistics --- 00:29:02.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.118 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:29:02.378 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:02.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:02.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:29:02.378 00:29:02.378 --- 10.0.0.2 ping statistics --- 00:29:02.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.378 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:29:02.378 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.378 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:29:02.378 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:29:02.378 15:36:48 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:03.324 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:03.324 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:03.324 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=110932 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 110932 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 110932 ']' 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.324 15:36:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:03.583 [2024-11-20 15:36:49.851992] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
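nvmfappstart, whose startup banner begins above, reduces to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A rough sketch under that reading; the real waitforlisten in autotest_common.sh polls rpc.py with retries rather than the bare socket test used here:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# crude stand-in for waitforlisten on /var/tmp/spdk.sock (path from the log)
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done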
00:29:03.583 [2024-11-20 15:36:49.852065] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.583 [2024-11-20 15:36:50.004663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:03.843 [2024-11-20 15:36:50.071332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.843 [2024-11-20 15:36:50.071386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:03.843 [2024-11-20 15:36:50.071396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.843 [2024-11-20 15:36:50.071404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.843 [2024-11-20 15:36:50.071411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:03.843 [2024-11-20 15:36:50.072827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.843 [2024-11-20 15:36:50.073032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.843 [2024-11-20 15:36:50.073115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.843 [2024-11-20 15:36:50.073120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:29:04.412 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:29:04.413 15:36:50 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:04.413 15:36:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:29:04.413 15:36:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:29:04.413 15:36:50 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:04.413 15:36:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:04.413 15:36:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.413 15:36:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:04.413 ************************************ 00:29:04.413 START TEST spdk_target_abort 00:29:04.413 ************************************ 00:29:04.413 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:29:04.413 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:04.413 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:29:04.413 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.413 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:04.672 spdk_targetn1 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:04.672 [2024-11-20 15:36:50.934639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:04.672 [2024-11-20 15:36:50.977957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.672 15:36:50 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:04.672 15:36:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:07.963 Initializing NVMe Controllers 00:29:07.963 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:29:07.963 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:07.963 Initialization complete. Launching workers. 
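The run launched above is the first of three: rabort builds the target string field by field (trtype, adrfam, traddr, trsvcid, subnqn, as traced earlier) and then drives the abort example once per queue depth. Condensed to its effect, using only values from this log:

target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do   # qds=(4 24 64) per target/abort_qd_sizes.sh@26
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done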
00:29:07.963 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11232, failed: 0 00:29:07.963 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1274, failed to submit 9958 00:29:07.963 success 737, unsuccessful 537, failed 0 00:29:07.963 15:36:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:07.963 15:36:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:11.253 Initializing NVMe Controllers 00:29:11.253 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:29:11.253 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:11.253 Initialization complete. Launching workers. 00:29:11.253 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6000, failed: 0 00:29:11.253 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1271, failed to submit 4729 00:29:11.253 success 274, unsuccessful 997, failed 0 00:29:11.253 15:36:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:11.253 15:36:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:14.542 Initializing NVMe Controllers 00:29:14.542 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:29:14.542 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:14.542 Initialization complete. Launching workers. 
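The abort counters above are internally consistent: every completed I/O either had an abort submitted for it or the abort failed to submit (1274 + 9958 = 11232 at qd=4; 1271 + 4729 = 6000 at qd=24), and the submitted aborts split exactly into success plus unsuccessful (737 + 537 = 1274; 274 + 997 = 1271). Roughly, the success/unsuccessful split reflects whether each target command was still in flight when its abort arrived, while a non-zero "failed" count would indicate abort submissions erroring outright.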
00:29:14.542 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31119, failed: 0 00:29:14.542 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2673, failed to submit 28446 00:29:14.542 success 497, unsuccessful 2176, failed 0 00:29:14.542 15:37:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:14.542 15:37:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.542 15:37:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:14.542 15:37:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.542 15:37:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:14.542 15:37:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.542 15:37:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.109 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.109 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 110932 00:29:15.109 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 110932 ']' 00:29:15.109 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 110932 00:29:15.109 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:29:15.109 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.109 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110932 00:29:15.368 killing process with pid 110932 00:29:15.368 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:15.368 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:15.368 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110932' 00:29:15.368 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 110932 00:29:15.368 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 110932 00:29:15.368 00:29:15.368 real 0m10.949s 00:29:15.368 user 0m44.282s 00:29:15.368 sys 0m2.238s 00:29:15.368 ************************************ 00:29:15.368 END TEST spdk_target_abort 00:29:15.368 ************************************ 00:29:15.368 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.368 15:37:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.628 15:37:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:15.628 15:37:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:15.628 15:37:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.628 15:37:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:15.628 ************************************ 00:29:15.628 START TEST kernel_target_abort 00:29:15.628 
************************************ 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:15.628 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:15.629 15:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:16.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:16.202 Waiting for block devices as requested 00:29:16.202 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:16.202 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:29:16.481 No valid GPT data, bailing 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:29:16.481 No valid GPT data, bailing 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
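The per-device probe being traced here, condensed into a standalone sketch: a device is picked as the nvmet backing namespace when it is not zoned and blkid reports no partition-table type (the empty pt= traced above). The device name is an example; the real loop walks /sys/block/nvme*:

  block=nvme0n1   # example; the script loops over /sys/block/nvme*
  # not zoned (or no zoned attribute at all)...
  if [[ ! -e /sys/block/$block/queue/zoned ]] || [[ $(cat /sys/block/$block/queue/zoned) == none ]]; then
      # ...and no partition table reported by blkid means the device is free
      pt=$(blkid -s PTTYPE -o value "/dev/$block")
      [[ -z $pt ]] && nvme=/dev/$block
  fi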
00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:29:16.481 No valid GPT data, bailing 00:29:16.481 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:29:16.761 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:29:16.761 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:29:16.761 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:29:16.761 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:16.761 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:16.761 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:29:16.761 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:29:16.761 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:16.761 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:16.761 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:29:16.761 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:29:16.761 15:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:29:16.761 No valid GPT data, bailing 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:29:16.761 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 --hostid=911e0f85-f038-4b18-8714-1526ee6c3105 -a 10.0.0.1 -t tcp -s 4420 00:29:16.762 00:29:16.762 Discovery Log Number of Records 2, Generation counter 2 00:29:16.762 =====Discovery Log Entry 0====== 00:29:16.762 trtype: tcp 00:29:16.762 adrfam: ipv4 00:29:16.762 subtype: current discovery subsystem 00:29:16.762 treq: not specified, sq flow control disable supported 00:29:16.762 portid: 1 00:29:16.762 trsvcid: 4420 00:29:16.762 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:16.762 traddr: 10.0.0.1 00:29:16.762 eflags: none 00:29:16.762 sectype: none 00:29:16.762 =====Discovery Log Entry 1====== 00:29:16.762 trtype: tcp 00:29:16.762 adrfam: ipv4 00:29:16.762 subtype: nvme subsystem 00:29:16.762 treq: not specified, sq flow control disable supported 00:29:16.762 portid: 1 00:29:16.762 trsvcid: 4420 00:29:16.762 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:16.762 traddr: 10.0.0.1 00:29:16.762 eflags: none 00:29:16.762 sectype: none 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:16.762 15:37:03 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:16.762 15:37:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:20.053 Initializing NVMe Controllers 00:29:20.053 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:20.053 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:20.053 Initialization complete. Launching workers. 00:29:20.053 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34020, failed: 0 00:29:20.053 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34020, failed to submit 0 00:29:20.053 success 0, unsuccessful 34020, failed 0 00:29:20.053 15:37:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:20.053 15:37:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:23.342 Initializing NVMe Controllers 00:29:23.342 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:23.342 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:23.342 Initialization complete. Launching workers. 
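The configure_kernel_target sequence traced above, gathered into one sketch. The xtrace output does not show where each echo is redirected, so the configfs attribute file names below are the standard kernel nvmet ones and should be read as an assumption; the values and NQN are the ones from this log:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir "$sub"
  mkdir "$sub/namespaces/1"
  mkdir "$port"
  echo "SPDK-$nqn" > "$sub/attr_model"                 # assumed target of the 'echo SPDK-...' above
  echo 1 > "$sub/attr_allow_any_host"                  # assumed target of the first 'echo 1'
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"  # device selected by the block scan
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                     # expose the subsystem on the port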
00:29:23.342 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57412, failed: 0 00:29:23.342 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30386, failed to submit 27026 00:29:23.342 success 0, unsuccessful 30386, failed 0 00:29:23.342 15:37:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:23.342 15:37:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:26.631 Initializing NVMe Controllers 00:29:26.631 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:26.631 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:26.631 Initialization complete. Launching workers. 00:29:26.631 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87972, failed: 0 00:29:26.631 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21976, failed to submit 65996 00:29:26.631 success 0, unsuccessful 21976, failed 0 00:29:26.631 15:37:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:26.631 15:37:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:26.631 15:37:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:29:26.631 15:37:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:26.631 15:37:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:26.631 15:37:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:26.631 15:37:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:26.631 15:37:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:26.631 15:37:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:26.631 15:37:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:27.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:29.735 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:29.735 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:29.735 ************************************ 00:29:29.735 END TEST kernel_target_abort 00:29:29.735 ************************************ 00:29:29.735 00:29:29.735 real 0m14.056s 00:29:29.735 user 0m5.880s 00:29:29.735 sys 0m5.143s 00:29:29.735 15:37:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.735 15:37:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.735 15:37:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:29.735 15:37:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:29.736 
15:37:15 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:29.736 rmmod nvme_tcp 00:29:29.736 rmmod nvme_fabrics 00:29:29.736 rmmod nvme_keyring 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 110932 ']' 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 110932 00:29:29.736 Process with pid 110932 is not found 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 110932 ']' 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 110932 00:29:29.736 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (110932) - No such process 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 110932 is not found' 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:29:29.736 15:37:16 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:30.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:30.305 Waiting for block devices as requested 00:29:30.305 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:30.566 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:30.566 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:30.566 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:30.566 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:29:30.566 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:29:30.566 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:29:30.566 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:30.566 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:30.566 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:30.566 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:30.566 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:30.826 15:37:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:30.826 15:37:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.085 15:37:17 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:29:31.085 00:29:31.085 real 0m29.600s 00:29:31.085 user 0m51.615s 00:29:31.085 sys 0m9.663s 00:29:31.085 ************************************ 00:29:31.085 END TEST nvmf_abort_qd_sizes 00:29:31.085 ************************************ 00:29:31.085 15:37:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:31.085 15:37:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:31.085 15:37:17 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:31.085 15:37:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:31.085 15:37:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:31.085 15:37:17 -- common/autotest_common.sh@10 -- # set +x 00:29:31.085 ************************************ 00:29:31.085 START TEST keyring_file 00:29:31.085 ************************************ 00:29:31.085 15:37:17 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:31.085 * Looking for test storage... 
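The nvmf_veth_fini teardown traced just above, condensed (interface and namespace names as they appear in this log; the final netns removal happens inside _remove_spdk_ns behind xtrace_disable_per_cmd, so that last line is an assumption):

  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster   # unhook the veth end from the bridge
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumption: what _remove_spdk_ns does under the hood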
00:29:31.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:31.085 15:37:17 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:31.085 15:37:17 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:29:31.085 15:37:17 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:31.345 15:37:17 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@345 -- # : 1 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@353 -- # local d=1 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@355 -- # echo 1 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@353 -- # local d=2 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@355 -- # echo 2 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:31.345 15:37:17 keyring_file -- scripts/common.sh@368 -- # return 0 00:29:31.345 15:37:17 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.345 15:37:17 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:31.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.345 --rc genhtml_branch_coverage=1 00:29:31.345 --rc genhtml_function_coverage=1 00:29:31.345 --rc genhtml_legend=1 00:29:31.345 --rc geninfo_all_blocks=1 00:29:31.345 --rc geninfo_unexecuted_blocks=1 00:29:31.345 00:29:31.345 ' 00:29:31.345 15:37:17 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:31.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.345 --rc genhtml_branch_coverage=1 00:29:31.345 --rc genhtml_function_coverage=1 00:29:31.345 --rc genhtml_legend=1 00:29:31.345 --rc geninfo_all_blocks=1 00:29:31.345 --rc 
geninfo_unexecuted_blocks=1 00:29:31.345 00:29:31.345 ' 00:29:31.345 15:37:17 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:31.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.345 --rc genhtml_branch_coverage=1 00:29:31.345 --rc genhtml_function_coverage=1 00:29:31.345 --rc genhtml_legend=1 00:29:31.345 --rc geninfo_all_blocks=1 00:29:31.345 --rc geninfo_unexecuted_blocks=1 00:29:31.345 00:29:31.345 ' 00:29:31.346 15:37:17 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:31.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.346 --rc genhtml_branch_coverage=1 00:29:31.346 --rc genhtml_function_coverage=1 00:29:31.346 --rc genhtml_legend=1 00:29:31.346 --rc geninfo_all_blocks=1 00:29:31.346 --rc geninfo_unexecuted_blocks=1 00:29:31.346 00:29:31.346 ' 00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:31.346 15:37:17 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:29:31.346 15:37:17 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.346 15:37:17 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.346 15:37:17 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.346 15:37:17 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.346 15:37:17 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.346 15:37:17 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.346 15:37:17 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:31.346 15:37:17 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@51 -- # : 0 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:31.346 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:31.346 15:37:17 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yQsOcvwdXA 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@733 -- # python - 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yQsOcvwdXA 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yQsOcvwdXA 00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.yQsOcvwdXA 00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QLkN40qK40 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:29:31.346 15:37:17 keyring_file -- nvmf/common.sh@733 -- # python - 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QLkN40qK40 00:29:31.346 15:37:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QLkN40qK40 00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.QLkN40qK40 00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@30 -- # tgtpid=111872 00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
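The two /tmp/tmp.* files created above hold PSKs in the NVMe/TCP interchange format. The body of the 'python -' heredoc is not shown in the xtrace, so the sketch below is a reading of what format_key likely computes from its prefix/key/digest inputs, not the verbatim helper: base64 over the raw key bytes plus a little-endian CRC-32, wrapped as NVMeTLSkey-1:<digest>:...: (the payload layout is an assumption based on the TP 8006 interchange format):

  key=00112233445566778899aabbccddeeff   # key0 hex from file.sh; digest 0 = no hash
  python3 - "$key" <<'EOF'
  import base64, struct, sys, zlib
  raw = bytes.fromhex(sys.argv[1])
  # CRC-32 of the key bytes, packed little-endian and appended before encoding
  crc = struct.pack("<I", zlib.crc32(raw) & 0xffffffff)
  print("NVMeTLSkey-1:00:{}:".format(base64.b64encode(raw + crc).decode()))
  EOF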
00:29:31.346 15:37:17 keyring_file -- keyring/file.sh@32 -- # waitforlisten 111872 00:29:31.346 15:37:17 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111872 ']' 00:29:31.346 15:37:17 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.346 15:37:17 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.346 15:37:17 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.346 15:37:17 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.346 15:37:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:31.606 [2024-11-20 15:37:17.844142] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:29:31.606 [2024-11-20 15:37:17.844364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111872 ] 00:29:31.606 [2024-11-20 15:37:17.992992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.606 [2024-11-20 15:37:18.033740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:32.544 15:37:18 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:32.544 [2024-11-20 15:37:18.737380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.544 null0 00:29:32.544 [2024-11-20 15:37:18.769315] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:32.544 [2024-11-20 15:37:18.769473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.544 15:37:18 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:32.544 [2024-11-20 15:37:18.801241] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:32.544 2024/11/20 15:37:18 error on JSON-RPC call, method: 
nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:29:32.544 request: 00:29:32.544 { 00:29:32.544 "method": "nvmf_subsystem_add_listener", 00:29:32.544 "params": { 00:29:32.544 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:32.544 "secure_channel": false, 00:29:32.544 "listen_address": { 00:29:32.544 "trtype": "tcp", 00:29:32.544 "traddr": "127.0.0.1", 00:29:32.544 "trsvcid": "4420" 00:29:32.544 } 00:29:32.544 } 00:29:32.544 } 00:29:32.544 Got JSON-RPC error response 00:29:32.544 GoRPCClient: error on JSON-RPC call 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:32.544 15:37:18 keyring_file -- keyring/file.sh@47 -- # bperfpid=111907 00:29:32.544 15:37:18 keyring_file -- keyring/file.sh@49 -- # waitforlisten 111907 /var/tmp/bperf.sock 00:29:32.544 15:37:18 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111907 ']' 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:32.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.544 15:37:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:32.544 [2024-11-20 15:37:18.870828] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
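To recap the moving parts being started here (paths and flags exactly as they appear in this log): spdk_tgt owns /var/tmp/spdk.sock, while bdevperf is launched suspended (-z) on its own socket and is only ever driven through rpc.py against /var/tmp/bperf.sock, which is all the bperf_cmd wrapper does:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z &
  bperfpid=$!
  # once waitforlisten sees the socket, every keyring/bdev call goes through it:
  bperf_cmd() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
  }
  bperf_cmd keyring_file_add_key key0 /tmp/tmp.yQsOcvwdXA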
00:29:32.544 [2024-11-20 15:37:18.871019] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111907 ] 00:29:32.544 [2024-11-20 15:37:19.020092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.803 [2024-11-20 15:37:19.079006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.370 15:37:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.370 15:37:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:33.370 15:37:19 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yQsOcvwdXA 00:29:33.370 15:37:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yQsOcvwdXA 00:29:33.629 15:37:19 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QLkN40qK40 00:29:33.629 15:37:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QLkN40qK40 00:29:33.888 15:37:20 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:29:33.888 15:37:20 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:33.888 15:37:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:33.888 15:37:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.888 15:37:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:34.146 15:37:20 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.yQsOcvwdXA == \/\t\m\p\/\t\m\p\.\y\Q\s\O\c\v\w\d\X\A ]] 00:29:34.146 15:37:20 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:29:34.146 15:37:20 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:29:34.146 15:37:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.146 15:37:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.146 15:37:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:34.146 15:37:20 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.QLkN40qK40 == \/\t\m\p\/\t\m\p\.\Q\L\k\N\4\0\q\K\4\0 ]] 00:29:34.146 15:37:20 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:29:34.146 15:37:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:34.146 15:37:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.146 15:37:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:34.146 15:37:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.146 15:37:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.405 15:37:20 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:34.405 15:37:20 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:29:34.405 15:37:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:34.405 15:37:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.405 15:37:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.405 15:37:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:29:34.405 15:37:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.664 15:37:21 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:29:34.665 15:37:21 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:34.665 15:37:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:34.924 [2024-11-20 15:37:21.219509] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:34.924 nvme0n1 00:29:34.924 15:37:21 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:29:34.924 15:37:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.924 15:37:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:34.924 15:37:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.924 15:37:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.924 15:37:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:35.184 15:37:21 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:29:35.184 15:37:21 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:29:35.184 15:37:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:35.184 15:37:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:35.184 15:37:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:35.184 15:37:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:35.184 15:37:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.443 15:37:21 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:29:35.443 15:37:21 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:35.443 Running I/O for 1 seconds... 
00:29:36.380 15892.00 IOPS, 62.08 MiB/s 00:29:36.380 Latency(us) 00:29:36.380 [2024-11-20T15:37:22.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.380 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:36.380 nvme0n1 : 1.00 15945.94 62.29 0.00 0.00 8011.24 3184.68 14528.46 00:29:36.380 [2024-11-20T15:37:22.862Z] =================================================================================================================== 00:29:36.380 [2024-11-20T15:37:22.862Z] Total : 15945.94 62.29 0.00 0.00 8011.24 3184.68 14528.46 00:29:36.380 { 00:29:36.380 "results": [ 00:29:36.380 { 00:29:36.380 "job": "nvme0n1", 00:29:36.380 "core_mask": "0x2", 00:29:36.380 "workload": "randrw", 00:29:36.380 "percentage": 50, 00:29:36.380 "status": "finished", 00:29:36.380 "queue_depth": 128, 00:29:36.380 "io_size": 4096, 00:29:36.380 "runtime": 1.004707, 00:29:36.380 "iops": 15945.94244889306, 00:29:36.380 "mibps": 62.288837690988515, 00:29:36.380 "io_failed": 0, 00:29:36.380 "io_timeout": 0, 00:29:36.380 "avg_latency_us": 8011.239285385723, 00:29:36.380 "min_latency_us": 3184.681124497992, 00:29:36.380 "max_latency_us": 14528.462650602409 00:29:36.380 } 00:29:36.380 ], 00:29:36.380 "core_count": 1 00:29:36.380 } 00:29:36.380 15:37:22 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:36.380 15:37:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:36.639 15:37:23 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:29:36.639 15:37:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:36.639 15:37:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:36.640 15:37:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:36.640 15:37:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:36.640 15:37:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.899 15:37:23 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:36.899 15:37:23 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:29:36.899 15:37:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:36.899 15:37:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:36.899 15:37:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:36.899 15:37:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.899 15:37:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:37.158 15:37:23 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:29:37.158 15:37:23 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:37.158 15:37:23 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:37.158 15:37:23 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:37.158 15:37:23 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:37.158 15:37:23 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:37.158 15:37:23 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:37.158 15:37:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:37.158 15:37:23 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:37.158 15:37:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:37.418 [2024-11-20 15:37:23.734954] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:37.418 [2024-11-20 15:37:23.735313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145a510 (107): Transport endpoint is not connected 00:29:37.418 [2024-11-20 15:37:23.736299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145a510 (9): Bad file descriptor 00:29:37.418 [2024-11-20 15:37:23.737295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:29:37.418 [2024-11-20 15:37:23.737312] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:37.418 [2024-11-20 15:37:23.737322] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:37.418 [2024-11-20 15:37:23.737332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
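The failure traced here is deliberate: key1 is not the PSK the listener was set up with, so the attach has to fail, and the NOT wrapper turns that failure into a test pass. Roughly (a simplified sketch of the autotest_common.sh helper; the real one also inspects the exit code, which is what the es=... lines in this log are doing):

  NOT() {
      if "$@"; then
          return 1   # wrapped command unexpectedly succeeded
      fi
      return 0       # wrapped command failed, which is what the test expects
  }
  # bperf_cmd as sketched earlier in this log
  NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1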
00:29:37.418 2024/11/20 15:37:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:37.418 request: 00:29:37.418 { 00:29:37.418 "method": "bdev_nvme_attach_controller", 00:29:37.418 "params": { 00:29:37.418 "name": "nvme0", 00:29:37.418 "trtype": "tcp", 00:29:37.418 "traddr": "127.0.0.1", 00:29:37.418 "adrfam": "ipv4", 00:29:37.418 "trsvcid": "4420", 00:29:37.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:37.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:37.418 "prchk_reftag": false, 00:29:37.418 "prchk_guard": false, 00:29:37.418 "hdgst": false, 00:29:37.418 "ddgst": false, 00:29:37.418 "psk": "key1", 00:29:37.418 "allow_unrecognized_csi": false 00:29:37.418 } 00:29:37.418 } 00:29:37.418 Got JSON-RPC error response 00:29:37.418 GoRPCClient: error on JSON-RPC call 00:29:37.418 15:37:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:37.418 15:37:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:37.418 15:37:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:37.418 15:37:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:37.418 15:37:23 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:29:37.418 15:37:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.418 15:37:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:37.418 15:37:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.418 15:37:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.418 15:37:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:37.677 15:37:23 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:37.677 15:37:23 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:29:37.677 15:37:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:37.677 15:37:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.677 15:37:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.677 15:37:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:37.677 15:37:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.936 15:37:24 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:29:37.936 15:37:24 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:29:37.936 15:37:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:37.936 15:37:24 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:29:37.936 15:37:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:38.195 15:37:24 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:29:38.195 15:37:24 keyring_file -- keyring/file.sh@78 -- # jq length 00:29:38.195 15:37:24 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.454 15:37:24 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:29:38.454 15:37:24 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.yQsOcvwdXA 00:29:38.454 15:37:24 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.yQsOcvwdXA 00:29:38.454 15:37:24 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:38.454 15:37:24 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.yQsOcvwdXA 00:29:38.454 15:37:24 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:38.454 15:37:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.454 15:37:24 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:38.454 15:37:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.454 15:37:24 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yQsOcvwdXA 00:29:38.454 15:37:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yQsOcvwdXA 00:29:38.713 [2024-11-20 15:37:25.037589] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yQsOcvwdXA': 0100660 00:29:38.713 [2024-11-20 15:37:25.037632] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:38.713 2024/11/20 15:37:25 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.yQsOcvwdXA], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:29:38.713 request: 00:29:38.713 { 00:29:38.713 "method": "keyring_file_add_key", 00:29:38.713 "params": { 00:29:38.713 "name": "key0", 00:29:38.713 "path": "/tmp/tmp.yQsOcvwdXA" 00:29:38.713 } 00:29:38.713 } 00:29:38.713 Got JSON-RPC error response 00:29:38.713 GoRPCClient: error on JSON-RPC call 00:29:38.714 15:37:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:38.714 15:37:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:38.714 15:37:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:38.714 15:37:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:38.714 15:37:25 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.yQsOcvwdXA 00:29:38.714 15:37:25 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yQsOcvwdXA 00:29:38.714 15:37:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yQsOcvwdXA 00:29:38.973 15:37:25 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.yQsOcvwdXA 00:29:38.973 15:37:25 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:29:38.973 15:37:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:38.973 15:37:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:38.973 15:37:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:38.973 15:37:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.973 15:37:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:39.232 15:37:25 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:29:39.232 15:37:25 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.232 15:37:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:39.232 15:37:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.232 15:37:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:39.232 15:37:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:39.232 15:37:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:39.232 15:37:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:39.232 15:37:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.232 15:37:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.232 [2024-11-20 15:37:25.696663] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.yQsOcvwdXA': No such file or directory 00:29:39.232 [2024-11-20 15:37:25.696701] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:39.232 [2024-11-20 15:37:25.696721] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:39.232 [2024-11-20 15:37:25.696732] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:29:39.232 [2024-11-20 15:37:25.696742] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:39.232 [2024-11-20 15:37:25.696751] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:39.232 2024/11/20 15:37:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:29:39.232 request: 00:29:39.232 { 00:29:39.232 "method": "bdev_nvme_attach_controller", 00:29:39.232 "params": { 00:29:39.232 "name": "nvme0", 00:29:39.232 "trtype": "tcp", 00:29:39.232 "traddr": "127.0.0.1", 00:29:39.232 "adrfam": "ipv4", 00:29:39.232 "trsvcid": "4420", 00:29:39.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:39.232 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:39.232 "prchk_reftag": false, 00:29:39.232 "prchk_guard": false, 00:29:39.232 "hdgst": false, 00:29:39.232 "ddgst": false, 00:29:39.232 "psk": "key0", 00:29:39.232 "allow_unrecognized_csi": false 00:29:39.232 } 00:29:39.232 } 00:29:39.232 Got JSON-RPC error response 00:29:39.232 
GoRPCClient: error on JSON-RPC call 00:29:39.491 15:37:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:39.491 15:37:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:39.491 15:37:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:39.491 15:37:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:39.491 15:37:25 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:29:39.491 15:37:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:39.491 15:37:25 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:39.491 15:37:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:39.491 15:37:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:39.491 15:37:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:39.491 15:37:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:39.491 15:37:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:39.491 15:37:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OpRMJ8Ek9x 00:29:39.491 15:37:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:39.491 15:37:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:39.491 15:37:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:29:39.491 15:37:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:39.491 15:37:25 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:39.491 15:37:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:29:39.491 15:37:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:29:39.750 15:37:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OpRMJ8Ek9x 00:29:39.750 15:37:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OpRMJ8Ek9x 00:29:39.750 15:37:25 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.OpRMJ8Ek9x 00:29:39.750 15:37:25 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OpRMJ8Ek9x 00:29:39.750 15:37:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OpRMJ8Ek9x 00:29:39.750 15:37:26 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.750 15:37:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:40.009 nvme0n1 00:29:40.009 15:37:26 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:29:40.009 15:37:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:40.009 15:37:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:40.009 15:37:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.010 15:37:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.010 15:37:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
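The format_interchange_psk/format_key pair traced above (keyring/common.sh@20, nvmf/common.sh@743/@733) builds the TLS PSK interchange form: the cleartext key bytes plus a little-endian CRC32 trailer, base64-encoded between the NVMeTLSkey-1 prefix and a two-digit digest field. A minimal sketch, assuming the key is passed as ASCII text as in this test (a reconstruction, not the verbatim helper):

format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PY'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte little-endian CRC32 trailer
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
PY
}
# format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 should reproduce the
# NVMeTLSkey-1:00:MDAx...JEiQ: string used as key material throughout this run
# (assuming the CRC32-trailer layout above is right).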
00:29:40.269 15:37:26 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:29:40.269 15:37:26 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:29:40.269 15:37:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:40.528 15:37:26 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:29:40.528 15:37:26 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:29:40.528 15:37:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:40.528 15:37:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.528 15:37:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.786 15:37:27 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:29:40.786 15:37:27 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:29:40.786 15:37:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:40.787 15:37:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:40.787 15:37:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.787 15:37:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.787 15:37:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:41.046 15:37:27 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:29:41.046 15:37:27 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:41.046 15:37:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:41.305 15:37:27 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:29:41.305 15:37:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:41.305 15:37:27 keyring_file -- keyring/file.sh@105 -- # jq length 00:29:41.305 15:37:27 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:29:41.305 15:37:27 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OpRMJ8Ek9x 00:29:41.305 15:37:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OpRMJ8Ek9x 00:29:41.564 15:37:27 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QLkN40qK40 00:29:41.564 15:37:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QLkN40qK40 00:29:41.823 15:37:28 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:41.823 15:37:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:42.083 nvme0n1 00:29:42.083 15:37:28 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:29:42.083 15:37:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
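As an aside on the dump that follows: the saved configuration is plain JSON, so a single subsystem can be pulled out of it for inspection, e.g. just the keyring section (an illustrative one-liner, not part of the test):

bperf_cmd save_config | jq '.subsystems[] | select(.subsystem == "keyring").config'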
00:29:42.343 15:37:28 keyring_file -- keyring/file.sh@113 -- # config='{ 00:29:42.343 "subsystems": [ 00:29:42.343 { 00:29:42.343 "subsystem": "keyring", 00:29:42.343 "config": [ 00:29:42.343 { 00:29:42.343 "method": "keyring_file_add_key", 00:29:42.343 "params": { 00:29:42.343 "name": "key0", 00:29:42.343 "path": "/tmp/tmp.OpRMJ8Ek9x" 00:29:42.343 } 00:29:42.343 }, 00:29:42.343 { 00:29:42.343 "method": "keyring_file_add_key", 00:29:42.343 "params": { 00:29:42.343 "name": "key1", 00:29:42.343 "path": "/tmp/tmp.QLkN40qK40" 00:29:42.343 } 00:29:42.343 } 00:29:42.343 ] 00:29:42.343 }, 00:29:42.343 { 00:29:42.343 "subsystem": "iobuf", 00:29:42.343 "config": [ 00:29:42.343 { 00:29:42.343 "method": "iobuf_set_options", 00:29:42.343 "params": { 00:29:42.343 "enable_numa": false, 00:29:42.343 "large_bufsize": 135168, 00:29:42.343 "large_pool_count": 1024, 00:29:42.343 "small_bufsize": 8192, 00:29:42.343 "small_pool_count": 8192 00:29:42.343 } 00:29:42.343 } 00:29:42.343 ] 00:29:42.343 }, 00:29:42.343 { 00:29:42.343 "subsystem": "sock", 00:29:42.343 "config": [ 00:29:42.343 { 00:29:42.343 "method": "sock_set_default_impl", 00:29:42.343 "params": { 00:29:42.343 "impl_name": "posix" 00:29:42.343 } 00:29:42.343 }, 00:29:42.343 { 00:29:42.343 "method": "sock_impl_set_options", 00:29:42.343 "params": { 00:29:42.343 "enable_ktls": false, 00:29:42.343 "enable_placement_id": 0, 00:29:42.343 "enable_quickack": false, 00:29:42.343 "enable_recv_pipe": true, 00:29:42.343 "enable_zerocopy_send_client": false, 00:29:42.343 "enable_zerocopy_send_server": true, 00:29:42.343 "impl_name": "ssl", 00:29:42.343 "recv_buf_size": 4096, 00:29:42.343 "send_buf_size": 4096, 00:29:42.343 "tls_version": 0, 00:29:42.343 "zerocopy_threshold": 0 00:29:42.343 } 00:29:42.343 }, 00:29:42.343 { 00:29:42.343 "method": "sock_impl_set_options", 00:29:42.343 "params": { 00:29:42.343 "enable_ktls": false, 00:29:42.343 "enable_placement_id": 0, 00:29:42.343 "enable_quickack": false, 00:29:42.343 "enable_recv_pipe": true, 00:29:42.343 "enable_zerocopy_send_client": false, 00:29:42.343 "enable_zerocopy_send_server": true, 00:29:42.343 "impl_name": "posix", 00:29:42.343 "recv_buf_size": 2097152, 00:29:42.343 "send_buf_size": 2097152, 00:29:42.343 "tls_version": 0, 00:29:42.343 "zerocopy_threshold": 0 00:29:42.343 } 00:29:42.343 } 00:29:42.343 ] 00:29:42.343 }, 00:29:42.343 { 00:29:42.343 "subsystem": "vmd", 00:29:42.343 "config": [] 00:29:42.343 }, 00:29:42.343 { 00:29:42.343 "subsystem": "accel", 00:29:42.343 "config": [ 00:29:42.343 { 00:29:42.343 "method": "accel_set_options", 00:29:42.343 "params": { 00:29:42.343 "buf_count": 2048, 00:29:42.343 "large_cache_size": 16, 00:29:42.343 "sequence_count": 2048, 00:29:42.343 "small_cache_size": 128, 00:29:42.343 "task_count": 2048 00:29:42.343 } 00:29:42.343 } 00:29:42.343 ] 00:29:42.343 }, 00:29:42.343 { 00:29:42.343 "subsystem": "bdev", 00:29:42.343 "config": [ 00:29:42.343 { 00:29:42.343 "method": "bdev_set_options", 00:29:42.343 "params": { 00:29:42.343 "bdev_auto_examine": true, 00:29:42.343 "bdev_io_cache_size": 256, 00:29:42.343 "bdev_io_pool_size": 65535, 00:29:42.343 "iobuf_large_cache_size": 16, 00:29:42.343 "iobuf_small_cache_size": 128 00:29:42.343 } 00:29:42.343 }, 00:29:42.343 { 00:29:42.343 "method": "bdev_raid_set_options", 00:29:42.343 "params": { 00:29:42.343 "process_max_bandwidth_mb_sec": 0, 00:29:42.343 "process_window_size_kb": 1024 00:29:42.343 } 00:29:42.343 }, 00:29:42.343 { 00:29:42.343 "method": "bdev_iscsi_set_options", 00:29:42.343 "params": { 00:29:42.343 
"timeout_sec": 30 00:29:42.343 } 00:29:42.343 }, 00:29:42.343 { 00:29:42.343 "method": "bdev_nvme_set_options", 00:29:42.343 "params": { 00:29:42.343 "action_on_timeout": "none", 00:29:42.343 "allow_accel_sequence": false, 00:29:42.343 "arbitration_burst": 0, 00:29:42.343 "bdev_retry_count": 3, 00:29:42.343 "ctrlr_loss_timeout_sec": 0, 00:29:42.343 "delay_cmd_submit": true, 00:29:42.343 "dhchap_dhgroups": [ 00:29:42.343 "null", 00:29:42.343 "ffdhe2048", 00:29:42.343 "ffdhe3072", 00:29:42.343 "ffdhe4096", 00:29:42.343 "ffdhe6144", 00:29:42.343 "ffdhe8192" 00:29:42.343 ], 00:29:42.343 "dhchap_digests": [ 00:29:42.343 "sha256", 00:29:42.343 "sha384", 00:29:42.343 "sha512" 00:29:42.343 ], 00:29:42.343 "disable_auto_failback": false, 00:29:42.343 "fast_io_fail_timeout_sec": 0, 00:29:42.343 "generate_uuids": false, 00:29:42.343 "high_priority_weight": 0, 00:29:42.344 "io_path_stat": false, 00:29:42.344 "io_queue_requests": 512, 00:29:42.344 "keep_alive_timeout_ms": 10000, 00:29:42.344 "low_priority_weight": 0, 00:29:42.344 "medium_priority_weight": 0, 00:29:42.344 "nvme_adminq_poll_period_us": 10000, 00:29:42.344 "nvme_error_stat": false, 00:29:42.344 "nvme_ioq_poll_period_us": 0, 00:29:42.344 "rdma_cm_event_timeout_ms": 0, 00:29:42.344 "rdma_max_cq_size": 0, 00:29:42.344 "rdma_srq_size": 0, 00:29:42.344 "reconnect_delay_sec": 0, 00:29:42.344 "timeout_admin_us": 0, 00:29:42.344 "timeout_us": 0, 00:29:42.344 "transport_ack_timeout": 0, 00:29:42.344 "transport_retry_count": 4, 00:29:42.344 "transport_tos": 0 00:29:42.344 } 00:29:42.344 }, 00:29:42.344 { 00:29:42.344 "method": "bdev_nvme_attach_controller", 00:29:42.344 "params": { 00:29:42.344 "adrfam": "IPv4", 00:29:42.344 "ctrlr_loss_timeout_sec": 0, 00:29:42.344 "ddgst": false, 00:29:42.344 "fast_io_fail_timeout_sec": 0, 00:29:42.344 "hdgst": false, 00:29:42.344 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:42.344 "multipath": "multipath", 00:29:42.344 "name": "nvme0", 00:29:42.344 "prchk_guard": false, 00:29:42.344 "prchk_reftag": false, 00:29:42.344 "psk": "key0", 00:29:42.344 "reconnect_delay_sec": 0, 00:29:42.344 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.344 "traddr": "127.0.0.1", 00:29:42.344 "trsvcid": "4420", 00:29:42.344 "trtype": "TCP" 00:29:42.344 } 00:29:42.344 }, 00:29:42.344 { 00:29:42.344 "method": "bdev_nvme_set_hotplug", 00:29:42.344 "params": { 00:29:42.344 "enable": false, 00:29:42.344 "period_us": 100000 00:29:42.344 } 00:29:42.344 }, 00:29:42.344 { 00:29:42.344 "method": "bdev_wait_for_examine" 00:29:42.344 } 00:29:42.344 ] 00:29:42.344 }, 00:29:42.344 { 00:29:42.344 "subsystem": "nbd", 00:29:42.344 "config": [] 00:29:42.344 } 00:29:42.344 ] 00:29:42.344 }' 00:29:42.344 15:37:28 keyring_file -- keyring/file.sh@115 -- # killprocess 111907 00:29:42.344 15:37:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111907 ']' 00:29:42.344 15:37:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111907 00:29:42.344 15:37:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:42.344 15:37:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.344 15:37:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111907 00:29:42.344 killing process with pid 111907 00:29:42.344 Received shutdown signal, test time was about 1.000000 seconds 00:29:42.344 00:29:42.344 Latency(us) 00:29:42.344 [2024-11-20T15:37:28.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.344 [2024-11-20T15:37:28.826Z] 
=================================================================================================================== 00:29:42.344 [2024-11-20T15:37:28.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:42.344 15:37:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:42.344 15:37:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:42.344 15:37:28 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111907' 00:29:42.344 15:37:28 keyring_file -- common/autotest_common.sh@973 -- # kill 111907 00:29:42.344 15:37:28 keyring_file -- common/autotest_common.sh@978 -- # wait 111907 00:29:42.604 15:37:29 keyring_file -- keyring/file.sh@118 -- # bperfpid=112363 00:29:42.604 15:37:29 keyring_file -- keyring/file.sh@120 -- # waitforlisten 112363 /var/tmp/bperf.sock 00:29:42.604 15:37:29 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 112363 ']' 00:29:42.604 15:37:29 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:42.604 15:37:29 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:42.604 15:37:29 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:29:42.604 "subsystems": [ 00:29:42.604 { 00:29:42.604 "subsystem": "keyring", 00:29:42.604 "config": [ 00:29:42.604 { 00:29:42.604 "method": "keyring_file_add_key", 00:29:42.604 "params": { 00:29:42.604 "name": "key0", 00:29:42.604 "path": "/tmp/tmp.OpRMJ8Ek9x" 00:29:42.604 } 00:29:42.604 }, 00:29:42.604 { 00:29:42.604 "method": "keyring_file_add_key", 00:29:42.604 "params": { 00:29:42.604 "name": "key1", 00:29:42.604 "path": "/tmp/tmp.QLkN40qK40" 00:29:42.604 } 00:29:42.604 } 00:29:42.604 ] 00:29:42.604 }, 00:29:42.604 { 00:29:42.604 "subsystem": "iobuf", 00:29:42.604 "config": [ 00:29:42.604 { 00:29:42.604 "method": "iobuf_set_options", 00:29:42.604 "params": { 00:29:42.604 "enable_numa": false, 00:29:42.604 "large_bufsize": 135168, 00:29:42.604 "large_pool_count": 1024, 00:29:42.604 "small_bufsize": 8192, 00:29:42.604 "small_pool_count": 8192 00:29:42.604 } 00:29:42.604 } 00:29:42.604 ] 00:29:42.604 }, 00:29:42.604 { 00:29:42.604 "subsystem": "sock", 00:29:42.604 "config": [ 00:29:42.604 { 00:29:42.604 "method": "sock_set_default_impl", 00:29:42.604 "params": { 00:29:42.604 "impl_name": "posix" 00:29:42.604 } 00:29:42.604 }, 00:29:42.604 { 00:29:42.604 "method": "sock_impl_set_options", 00:29:42.604 "params": { 00:29:42.604 "enable_ktls": false, 00:29:42.604 "enable_placement_id": 0, 00:29:42.604 "enable_quickack": false, 00:29:42.604 "enable_recv_pipe": true, 00:29:42.604 "enable_zerocopy_send_client": false, 00:29:42.604 "enable_zerocopy_send_server": true, 00:29:42.604 "impl_name": "ssl", 00:29:42.604 "recv_buf_size": 4096, 00:29:42.604 "send_buf_size": 4096, 00:29:42.604 "tls_version": 0, 00:29:42.604 "zerocopy_threshold": 0 00:29:42.604 } 00:29:42.604 }, 00:29:42.604 { 00:29:42.604 "method": "sock_impl_set_options", 00:29:42.604 "params": { 00:29:42.604 "enable_ktls": false, 00:29:42.604 "enable_placement_id": 0, 00:29:42.604 "enable_quickack": false, 00:29:42.604 "enable_recv_pipe": true, 00:29:42.604 "enable_zerocopy_send_client": false, 00:29:42.604 "enable_zerocopy_send_server": true, 00:29:42.604 "impl_name": "posix", 00:29:42.604 "recv_buf_size": 2097152, 00:29:42.604 "send_buf_size": 2097152, 00:29:42.604 "tls_version": 0, 00:29:42.604 "zerocopy_threshold": 0 00:29:42.604 } 
00:29:42.604 } 00:29:42.604 ] 00:29:42.604 }, 00:29:42.604 { 00:29:42.604 "subsystem": "vmd", 00:29:42.604 "config": [] 00:29:42.604 }, 00:29:42.604 { 00:29:42.604 "subsystem": "accel", 00:29:42.604 "config": [ 00:29:42.604 { 00:29:42.604 "method": "accel_set_options", 00:29:42.604 "params": { 00:29:42.604 "buf_count": 2048, 00:29:42.604 "large_cache_size": 16, 00:29:42.604 "sequence_count": 2048, 00:29:42.604 "small_cache_size": 128, 00:29:42.604 "task_count": 2048 00:29:42.604 } 00:29:42.604 } 00:29:42.604 ] 00:29:42.604 }, 00:29:42.604 { 00:29:42.604 "subsystem": "bdev", 00:29:42.604 "config": [ 00:29:42.604 { 00:29:42.604 "method": "bdev_set_options", 00:29:42.604 "params": { 00:29:42.604 "bdev_auto_examine": true, 00:29:42.604 "bdev_io_cache_size": 256, 00:29:42.604 "bdev_io_pool_size": 65535, 00:29:42.604 "iobuf_large_cache_size": 16, 00:29:42.604 "iobuf_small_cache_size": 128 00:29:42.604 } 00:29:42.604 }, 00:29:42.604 { 00:29:42.604 "method": "bdev_raid_set_options", 00:29:42.604 "params": { 00:29:42.604 "process_max_bandwidth_mb_sec": 0, 00:29:42.604 "process_window_size_kb": 1024 00:29:42.604 } 00:29:42.604 }, 00:29:42.604 { 00:29:42.604 "method": "bdev_iscsi_set_options", 00:29:42.604 "params": { 00:29:42.604 "timeout_sec": 30 00:29:42.604 } 00:29:42.604 }, 00:29:42.604 { 00:29:42.604 "method": "bdev_nvme_set_options", 00:29:42.604 "params": { 00:29:42.604 "action_on_timeout": "none", 00:29:42.604 "allow_accel_sequence": false, 00:29:42.604 "arbitration_burst": 0, 00:29:42.604 "bdev_retry_count": 3, 00:29:42.604 "ctrlr_loss_timeout_sec": 0, 00:29:42.604 "delay_cmd_submit": true, 00:29:42.604 "dhchap_dhgroups": [ 00:29:42.604 "null", 00:29:42.604 "ffdhe2048", 00:29:42.604 "ffdhe3072", 00:29:42.604 "ffdhe4096", 00:29:42.604 "ffdhe6144", 00:29:42.604 "ffdhe8192" 00:29:42.604 ], 00:29:42.604 "dhchap_digests": [ 00:29:42.604 "sha256", 00:29:42.604 "sha384", 00:29:42.604 "sha512" 00:29:42.604 ], 00:29:42.604 "disable_auto_failback": false, 00:29:42.604 "fast_io_fail_timeout_sec": 0, 00:29:42.604 "generate_uuids": false, 00:29:42.604 "high_priority_weight": 0, 00:29:42.604 "io_path_stat": false, 00:29:42.604 "io_queue_requests": 512, 00:29:42.604 "keep_alive_timeout_ms": 10000, 00:29:42.604 "low_priority_weight": 0, 00:29:42.604 "medium_priority_weight": 0, 00:29:42.604 "nvme_adminq_poll_period_us": 10000, 00:29:42.604 "nvme_error_stat": false, 00:29:42.604 "nvme_ioq_poll_period_us": 0, 00:29:42.604 "rdma_cm_event_timeout_ms": 0, 00:29:42.604 "rdma_max_cq_size": 0, 00:29:42.604 "rdma_srq_size": 0, 00:29:42.604 "reconnect_delay_sec": 0, 00:29:42.605 "timeout_admin_us": 0, 00:29:42.605 "timeout_us": 0, 00:29:42.605 "transport_ack_timeout": 0, 00:29:42.605 "transport_retry_count": 4, 00:29:42.605 "transport_tos": 0 00:29:42.605 } 00:29:42.605 }, 00:29:42.605 { 00:29:42.605 "method": "bdev_nvme_attach_controller", 00:29:42.605 "params": { 00:29:42.605 "adrfam": "IPv4", 00:29:42.605 "ctrlr_loss_timeout_sec": 0, 00:29:42.605 "ddgst": false, 00:29:42.605 "fast_io_fail_timeout_sec": 0, 00:29:42.605 "hdgst": false, 00:29:42.605 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:42.605 "multipath": "multipath", 00:29:42.605 "name": "nvme0", 00:29:42.605 "prchk_guard": false, 00:29:42.605 "prchk_reftag": false, 00:29:42.605 "psk": "key0", 00:29:42.605 "reconnect_delay_sec": 0, 00:29:42.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.605 "traddr": "127.0.0.1", 00:29:42.605 "trsvcid": "4420", 00:29:42.605 "trtype": "TCP" 00:29:42.605 } 00:29:42.605 }, 00:29:42.605 { 00:29:42.605 "method": "bdev_nvme_set_hotplug", 00:29:42.605 "params": { 00:29:42.605 "enable": false, 00:29:42.605 "period_us": 100000 00:29:42.605 } 00:29:42.605 }, 00:29:42.605 { 00:29:42.605 "method": "bdev_wait_for_examine" 00:29:42.605 } 00:29:42.605 ] 00:29:42.605 }, 00:29:42.605 { 00:29:42.605 "subsystem": "nbd", 00:29:42.605 "config": [] 00:29:42.605 } 00:29:42.605 ] 00:29:42.605 }' 00:29:42.604 15:37:29 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:42.604 15:37:29 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:42.605 15:37:29 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.875 15:37:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:42.875 [2024-11-20 15:37:29.119526] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization...
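The /dev/fd/63 in the bdevperf relaunch above is bash process substitution: keyring/file.sh@116 echoes the configuration saved from the previous instance and hands it straight back in as the -c config file, roughly (a sketch, bdevperf path shortened):

bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z \
    -c <(echo "$config")

so the new instance starts with key0, key1, and the nvme0 controller already in place, which is what the keys-length and refcnt checks just below verify.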
00:29:42.875 [2024-11-20 15:37:29.119603] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112363 ] 00:29:42.875 [2024-11-20 15:37:29.272778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.875 [2024-11-20 15:37:29.331207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.134 [2024-11-20 15:37:29.550324] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:43.703 15:37:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.703 15:37:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:43.703 15:37:29 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:29:43.703 15:37:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.703 15:37:29 keyring_file -- keyring/file.sh@121 -- # jq length 00:29:43.962 15:37:30 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:43.962 15:37:30 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:29:43.962 15:37:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:43.962 15:37:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:43.962 15:37:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.962 15:37:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.962 15:37:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:43.962 15:37:30 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:29:43.962 15:37:30 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:29:43.962 15:37:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:43.962 15:37:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:43.962 15:37:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.962 15:37:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:43.962 15:37:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:44.221 15:37:30 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:29:44.221 15:37:30 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:29:44.221 15:37:30 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:29:44.221 15:37:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:44.480 15:37:30 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:29:44.480 15:37:30 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:44.480 15:37:30 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.OpRMJ8Ek9x /tmp/tmp.QLkN40qK40 00:29:44.480 15:37:30 keyring_file -- keyring/file.sh@20 -- # killprocess 112363 00:29:44.480 15:37:30 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 112363 ']' 00:29:44.480 15:37:30 keyring_file -- common/autotest_common.sh@958 -- # kill -0 112363 00:29:44.480 15:37:30 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:44.480 15:37:30 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:29:44.480 15:37:30 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112363 00:29:44.480 killing process with pid 112363 00:29:44.480 Received shutdown signal, test time was about 1.000000 seconds 00:29:44.480 00:29:44.480 Latency(us) 00:29:44.480 [2024-11-20T15:37:30.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.480 [2024-11-20T15:37:30.962Z] =================================================================================================================== 00:29:44.480 [2024-11-20T15:37:30.962Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:44.480 15:37:30 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:44.480 15:37:30 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:44.480 15:37:30 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112363' 00:29:44.480 15:37:30 keyring_file -- common/autotest_common.sh@973 -- # kill 112363 00:29:44.480 15:37:30 keyring_file -- common/autotest_common.sh@978 -- # wait 112363 00:29:44.740 15:37:31 keyring_file -- keyring/file.sh@21 -- # killprocess 111872 00:29:44.740 15:37:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111872 ']' 00:29:44.740 15:37:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111872 00:29:44.740 15:37:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:44.740 15:37:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.740 15:37:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111872 00:29:45.046 killing process with pid 111872 00:29:45.046 15:37:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:45.046 15:37:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:45.046 15:37:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111872' 00:29:45.046 15:37:31 keyring_file -- common/autotest_common.sh@973 -- # kill 111872 00:29:45.046 15:37:31 keyring_file -- common/autotest_common.sh@978 -- # wait 111872 00:29:45.331 00:29:45.331 real 0m14.154s 00:29:45.331 user 0m33.008s 00:29:45.331 sys 0m4.103s 00:29:45.331 15:37:31 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.331 ************************************ 00:29:45.331 15:37:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:45.331 END TEST keyring_file 00:29:45.331 ************************************ 00:29:45.331 15:37:31 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:29:45.331 15:37:31 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:45.331 15:37:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:45.331 15:37:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:45.331 15:37:31 -- common/autotest_common.sh@10 -- # set +x 00:29:45.331 ************************************ 00:29:45.331 START TEST keyring_linux 00:29:45.331 ************************************ 00:29:45.331 15:37:31 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:45.331 Joined session keyring: 127754163 00:29:45.331 * Looking for test storage... 
00:29:45.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:45.331 15:37:31 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:45.331 15:37:31 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:29:45.331 15:37:31 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:45.592 15:37:31 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@345 -- # : 1 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@368 -- # return 0 00:29:45.592 15:37:31 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.592 15:37:31 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:45.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.592 --rc genhtml_branch_coverage=1 00:29:45.592 --rc genhtml_function_coverage=1 00:29:45.592 --rc genhtml_legend=1 00:29:45.592 --rc geninfo_all_blocks=1 00:29:45.592 --rc geninfo_unexecuted_blocks=1 00:29:45.592 00:29:45.592 ' 00:29:45.592 15:37:31 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:45.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.592 --rc genhtml_branch_coverage=1 00:29:45.592 --rc genhtml_function_coverage=1 00:29:45.592 --rc genhtml_legend=1 00:29:45.592 --rc 
geninfo_all_blocks=1 00:29:45.592 --rc geninfo_unexecuted_blocks=1 00:29:45.592 00:29:45.592 ' 00:29:45.592 15:37:31 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:45.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.592 --rc genhtml_branch_coverage=1 00:29:45.592 --rc genhtml_function_coverage=1 00:29:45.592 --rc genhtml_legend=1 00:29:45.592 --rc geninfo_all_blocks=1 00:29:45.592 --rc geninfo_unexecuted_blocks=1 00:29:45.592 00:29:45.592 ' 00:29:45.592 15:37:31 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:45.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.592 --rc genhtml_branch_coverage=1 00:29:45.592 --rc genhtml_function_coverage=1 00:29:45.592 --rc genhtml_legend=1 00:29:45.592 --rc geninfo_all_blocks=1 00:29:45.592 --rc geninfo_unexecuted_blocks=1 00:29:45.592 00:29:45.592 ' 00:29:45.592 15:37:31 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:45.592 15:37:31 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:911e0f85-f038-4b18-8714-1526ee6c3105 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=911e0f85-f038-4b18-8714-1526ee6c3105 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.592 15:37:31 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.592 15:37:31 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.592 15:37:31 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.592 15:37:31 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.592 15:37:31 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:45.592 15:37:31 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:45.592 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.592 15:37:31 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:45.592 15:37:31 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:45.592 15:37:31 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:45.592 15:37:31 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:45.592 15:37:31 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:45.592 15:37:31 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:45.592 15:37:31 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:45.592 15:37:31 keyring_linux -- keyring/common.sh@15 -- # local 
name key digest path 00:29:45.592 15:37:31 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:45.592 15:37:31 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:45.592 15:37:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:45.592 15:37:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:45.592 15:37:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:29:45.592 15:37:31 keyring_linux -- nvmf/common.sh@733 -- # python - 00:29:45.592 15:37:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:45.592 /tmp/:spdk-test:key0 00:29:45.592 15:37:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:45.592 15:37:31 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:45.592 15:37:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:45.592 15:37:31 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:45.593 15:37:31 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:45.593 15:37:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:45.593 15:37:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:45.593 15:37:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:45.593 15:37:31 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:45.593 15:37:31 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:29:45.593 15:37:31 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:45.593 15:37:31 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:29:45.593 15:37:31 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:29:45.593 15:37:31 keyring_linux -- nvmf/common.sh@733 -- # python - 00:29:45.593 15:37:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:45.593 /tmp/:spdk-test:key1 00:29:45.593 15:37:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:45.593 15:37:32 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:45.593 15:37:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=112525 00:29:45.593 15:37:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 112525 00:29:45.593 15:37:32 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 112525 ']' 00:29:45.593 15:37:32 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.593 15:37:32 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:45.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.593 15:37:32 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
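prep_key, traced here for /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1, reduces to roughly the following (a reconstruction of the keyring/common.sh@15-@23 lines; the explicit 0600 mode is the point, since the earlier keyring_file run showed a 0660 key file being rejected with 'Invalid permissions for key file'):

prep_key() {
    local name=$1 key=$2 digest=$3 path=${4:-$(mktemp)}  # linux.sh passes a path, file.sh lets mktemp pick one
    format_interchange_psk "$key" "$digest" > "$path"    # redirect assumed; xtrace does not show it
    chmod 0600 "$path"   # keyring_file_add_key refuses group- or world-accessible key files
    echo "$path"
}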
00:29:45.593 15:37:32 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:45.593 15:37:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:45.593 [2024-11-20 15:37:32.069723] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 00:29:45.593 [2024-11-20 15:37:32.069804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112525 ] 00:29:45.852 [2024-11-20 15:37:32.218135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.852 [2024-11-20 15:37:32.260307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.792 15:37:32 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:46.792 15:37:32 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:46.792 15:37:32 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:46.792 15:37:32 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.792 15:37:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:46.792 [2024-11-20 15:37:32.963456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.792 null0 00:29:46.792 [2024-11-20 15:37:32.995368] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:46.792 [2024-11-20 15:37:32.995527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:46.792 15:37:33 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.792 15:37:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:46.792 624274353 00:29:46.792 15:37:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:46.792 538112774 00:29:46.792 15:37:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=112557 00:29:46.792 15:37:33 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:46.792 15:37:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 112557 /var/tmp/bperf.sock 00:29:46.792 15:37:33 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 112557 ']' 00:29:46.792 15:37:33 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:46.792 15:37:33 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:46.792 15:37:33 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:46.792 15:37:33 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.792 15:37:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:46.792 [2024-11-20 15:37:33.081520] Starting SPDK v25.01-pre git sha1 1981e6eec / DPDK 24.03.0 initialization... 
00:29:46.792 [2024-11-20 15:37:33.081585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112557 ] 00:29:46.792 [2024-11-20 15:37:33.233113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.051 [2024-11-20 15:37:33.299092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.619 15:37:33 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.619 15:37:33 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:47.619 15:37:33 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:47.619 15:37:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:47.878 15:37:34 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:47.878 15:37:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:48.137 15:37:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:48.138 15:37:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:48.397 [2024-11-20 15:37:34.689929] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:48.397 nvme0n1 00:29:48.397 15:37:34 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:48.397 15:37:34 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:48.397 15:37:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:48.397 15:37:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:48.397 15:37:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:48.397 15:37:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:48.657 15:37:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:48.657 15:37:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:48.657 15:37:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:48.657 15:37:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:48.657 15:37:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:48.657 15:37:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:48.657 15:37:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:48.916 15:37:35 keyring_linux -- keyring/linux.sh@25 -- # sn=624274353 00:29:48.916 15:37:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:48.916 15:37:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:48.916 15:37:35 keyring_linux -- keyring/linux.sh@26 -- # [[ 624274353 == \6\2\4\2\7\4\3\5\3 ]] 00:29:48.916 15:37:35 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 624274353 00:29:48.916 15:37:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:48.916 15:37:35 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:48.916 Running I/O for 1 seconds... 00:29:50.111 16614.00 IOPS, 64.90 MiB/s 00:29:50.111 Latency(us) 00:29:50.111 [2024-11-20T15:37:36.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.111 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:50.111 nvme0n1 : 1.01 16610.68 64.89 0.00 0.00 7673.08 6527.28 17055.15 00:29:50.111 [2024-11-20T15:37:36.593Z] =================================================================================================================== 00:29:50.111 [2024-11-20T15:37:36.593Z] Total : 16610.68 64.89 0.00 0.00 7673.08 6527.28 17055.15 00:29:50.111 { 00:29:50.111 "results": [ 00:29:50.111 { 00:29:50.111 "job": "nvme0n1", 00:29:50.111 "core_mask": "0x2", 00:29:50.111 "workload": "randread", 00:29:50.111 "status": "finished", 00:29:50.111 "queue_depth": 128, 00:29:50.111 "io_size": 4096, 00:29:50.111 "runtime": 1.007906, 00:29:50.111 "iops": 16610.675995578953, 00:29:50.111 "mibps": 64.88545310773029, 00:29:50.111 "io_failed": 0, 00:29:50.111 "io_timeout": 0, 00:29:50.111 "avg_latency_us": 7673.083214137159, 00:29:50.111 "min_latency_us": 6527.28032128514, 00:29:50.111 "max_latency_us": 17055.151807228915 00:29:50.111 } 00:29:50.111 ], 00:29:50.111 "core_count": 1 00:29:50.111 } 00:29:50.111 15:37:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:50.111 15:37:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:50.111 15:37:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:50.111 15:37:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:50.111 15:37:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:50.111 15:37:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:50.111 15:37:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.111 15:37:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:50.370 15:37:36 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:50.371 15:37:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:50.371 15:37:36 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:50.371 15:37:36 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:50.371 15:37:36 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:29:50.371 15:37:36 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:50.371 15:37:36 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:50.371 15:37:36 keyring_linux -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.371 15:37:36 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:50.371 15:37:36 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.371 15:37:36 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:50.371 15:37:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:50.631 [2024-11-20 15:37:37.010738] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:50.631 [2024-11-20 15:37:37.011173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa85490 (107): Transport endpoint is not connected 00:29:50.631 [2024-11-20 15:37:37.012159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa85490 (9): Bad file descriptor 00:29:50.631 [2024-11-20 15:37:37.013156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:29:50.631 [2024-11-20 15:37:37.013174] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:50.631 [2024-11-20 15:37:37.013183] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:50.631 [2024-11-20 15:37:37.013194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:29:50.631 2024/11/20 15:37:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:50.631 request: 00:29:50.631 { 00:29:50.631 "method": "bdev_nvme_attach_controller", 00:29:50.631 "params": { 00:29:50.631 "name": "nvme0", 00:29:50.631 "trtype": "tcp", 00:29:50.631 "traddr": "127.0.0.1", 00:29:50.631 "adrfam": "ipv4", 00:29:50.631 "trsvcid": "4420", 00:29:50.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:50.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:50.631 "prchk_reftag": false, 00:29:50.631 "prchk_guard": false, 00:29:50.631 "hdgst": false, 00:29:50.631 "ddgst": false, 00:29:50.631 "psk": ":spdk-test:key1", 00:29:50.631 "allow_unrecognized_csi": false 00:29:50.631 } 00:29:50.631 } 00:29:50.631 Got JSON-RPC error response 00:29:50.631 GoRPCClient: error on JSON-RPC call 00:29:50.631 15:37:37 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:29:50.631 15:37:37 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:50.631 15:37:37 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:50.631 15:37:37 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@33 -- # sn=624274353 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 624274353 00:29:50.631 1 links removed 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@33 -- # sn=538112774 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 538112774 00:29:50.631 1 links removed 00:29:50.631 15:37:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 112557 00:29:50.631 15:37:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 112557 ']' 00:29:50.631 15:37:37 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 112557 00:29:50.631 15:37:37 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:50.631 15:37:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.631 15:37:37 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112557 00:29:50.890 15:37:37 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:50.890 
15:37:37 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:50.890 15:37:37 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112557' 00:29:50.890 killing process with pid 112557 00:29:50.890 15:37:37 keyring_linux -- common/autotest_common.sh@973 -- # kill 112557 00:29:50.891 Received shutdown signal, test time was about 1.000000 seconds 00:29:50.891 00:29:50.891 Latency(us) 00:29:50.891 [2024-11-20T15:37:37.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.891 [2024-11-20T15:37:37.373Z] =================================================================================================================== 00:29:50.891 [2024-11-20T15:37:37.373Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:50.891 15:37:37 keyring_linux -- common/autotest_common.sh@978 -- # wait 112557 00:29:50.891 15:37:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 112525 00:29:50.891 15:37:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 112525 ']' 00:29:50.891 15:37:37 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 112525 00:29:50.891 15:37:37 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:51.150 15:37:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:51.150 15:37:37 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112525 00:29:51.150 15:37:37 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:51.150 15:37:37 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:51.150 killing process with pid 112525 00:29:51.150 15:37:37 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112525' 00:29:51.150 15:37:37 keyring_linux -- common/autotest_common.sh@973 -- # kill 112525 00:29:51.150 15:37:37 keyring_linux -- common/autotest_common.sh@978 -- # wait 112525 00:29:51.410 00:29:51.410 real 0m6.096s 00:29:51.410 user 0m10.814s 00:29:51.410 sys 0m2.060s 00:29:51.410 15:37:37 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.410 15:37:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:51.410 ************************************ 00:29:51.410 END TEST keyring_linux 00:29:51.410 ************************************ 00:29:51.410 15:37:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:51.410 15:37:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:51.410 15:37:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:51.410 15:37:37 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:51.410 15:37:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:51.410 15:37:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:51.410 15:37:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:51.410 15:37:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:51.410 15:37:37 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:51.410 15:37:37 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:51.410 15:37:37 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:51.410 15:37:37 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:51.410 15:37:37 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:51.410 15:37:37 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:51.410 15:37:37 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:29:51.410 15:37:37 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:29:51.410 15:37:37 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:29:51.410 15:37:37 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.410 15:37:37 -- common/autotest_common.sh@10 -- # set +x 00:29:51.410 15:37:37 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:29:51.410 15:37:37 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:29:51.410 15:37:37 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:29:51.410 15:37:37 -- common/autotest_common.sh@10 -- # set +x 00:29:54.702 INFO: APP EXITING 00:29:54.702 INFO: killing all VMs 00:29:54.702 INFO: killing vhost app 00:29:54.702 INFO: EXIT DONE 00:29:54.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:55.222 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:55.222 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:56.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:56.161 Cleaning 00:29:56.161 Removing: /var/run/dpdk/spdk0/config 00:29:56.161 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:56.161 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:56.161 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:56.161 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:56.161 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:56.161 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:56.161 Removing: /var/run/dpdk/spdk1/config 00:29:56.161 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:56.161 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:56.161 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:56.162 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:56.162 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:56.162 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:56.162 Removing: /var/run/dpdk/spdk2/config 00:29:56.162 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:56.162 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:56.162 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:56.162 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:56.162 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:56.162 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:56.162 Removing: /var/run/dpdk/spdk3/config 00:29:56.162 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:56.162 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:56.162 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:56.162 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:56.162 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:56.162 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:56.162 Removing: /var/run/dpdk/spdk4/config 00:29:56.162 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:56.162 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:56.422 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:56.422 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:56.422 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:56.422 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:56.422 Removing: /dev/shm/nvmf_trace.0 00:29:56.422 Removing: /dev/shm/spdk_tgt_trace.pid58653 00:29:56.422 Removing: /var/run/dpdk/spdk0 00:29:56.422 Removing: /var/run/dpdk/spdk1 00:29:56.422 Removing: /var/run/dpdk/spdk2 00:29:56.422 Removing: /var/run/dpdk/spdk3 00:29:56.422 Removing: /var/run/dpdk/spdk4 00:29:56.422 Removing: /var/run/dpdk/spdk_pid102069 00:29:56.422 Removing: 
/var/run/dpdk/spdk_pid102109 00:29:56.422 Removing: /var/run/dpdk/spdk_pid102476 00:29:56.422 Removing: /var/run/dpdk/spdk_pid102526 00:29:56.422 Removing: /var/run/dpdk/spdk_pid102930 00:29:56.422 Removing: /var/run/dpdk/spdk_pid103507 00:29:56.422 Removing: /var/run/dpdk/spdk_pid103930 00:29:56.422 Removing: /var/run/dpdk/spdk_pid105012 00:29:56.422 Removing: /var/run/dpdk/spdk_pid106092 00:29:56.422 Removing: /var/run/dpdk/spdk_pid106209 00:29:56.422 Removing: /var/run/dpdk/spdk_pid106267 00:29:56.422 Removing: /var/run/dpdk/spdk_pid107895 00:29:56.422 Removing: /var/run/dpdk/spdk_pid108230 00:29:56.422 Removing: /var/run/dpdk/spdk_pid108572 00:29:56.422 Removing: /var/run/dpdk/spdk_pid109157 00:29:56.422 Removing: /var/run/dpdk/spdk_pid109162 00:29:56.422 Removing: /var/run/dpdk/spdk_pid109580 00:29:56.422 Removing: /var/run/dpdk/spdk_pid109744 00:29:56.422 Removing: /var/run/dpdk/spdk_pid109902 00:29:56.422 Removing: /var/run/dpdk/spdk_pid109999 00:29:56.422 Removing: /var/run/dpdk/spdk_pid110158 00:29:56.422 Removing: /var/run/dpdk/spdk_pid110269 00:29:56.422 Removing: /var/run/dpdk/spdk_pid111007 00:29:56.422 Removing: /var/run/dpdk/spdk_pid111037 00:29:56.422 Removing: /var/run/dpdk/spdk_pid111072 00:29:56.422 Removing: /var/run/dpdk/spdk_pid111331 00:29:56.422 Removing: /var/run/dpdk/spdk_pid111362 00:29:56.422 Removing: /var/run/dpdk/spdk_pid111397 00:29:56.422 Removing: /var/run/dpdk/spdk_pid111872 00:29:56.422 Removing: /var/run/dpdk/spdk_pid111907 00:29:56.422 Removing: /var/run/dpdk/spdk_pid112363 00:29:56.422 Removing: /var/run/dpdk/spdk_pid112525 00:29:56.422 Removing: /var/run/dpdk/spdk_pid112557 00:29:56.422 Removing: /var/run/dpdk/spdk_pid58496 00:29:56.422 Removing: /var/run/dpdk/spdk_pid58653 00:29:56.422 Removing: /var/run/dpdk/spdk_pid58923 00:29:56.422 Removing: /var/run/dpdk/spdk_pid59016 00:29:56.422 Removing: /var/run/dpdk/spdk_pid59061 00:29:56.422 Removing: /var/run/dpdk/spdk_pid59169 00:29:56.422 Removing: /var/run/dpdk/spdk_pid59195 00:29:56.682 Removing: /var/run/dpdk/spdk_pid59340 00:29:56.682 Removing: /var/run/dpdk/spdk_pid59615 00:29:56.682 Removing: /var/run/dpdk/spdk_pid59800 00:29:56.682 Removing: /var/run/dpdk/spdk_pid59902 00:29:56.682 Removing: /var/run/dpdk/spdk_pid60002 00:29:56.682 Removing: /var/run/dpdk/spdk_pid60105 00:29:56.682 Removing: /var/run/dpdk/spdk_pid60138 00:29:56.682 Removing: /var/run/dpdk/spdk_pid60168 00:29:56.682 Removing: /var/run/dpdk/spdk_pid60243 00:29:56.682 Removing: /var/run/dpdk/spdk_pid60377 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61011 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61075 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61144 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61172 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61262 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61290 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61376 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61408 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61462 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61492 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61543 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61573 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61733 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61763 00:29:56.682 Removing: /var/run/dpdk/spdk_pid61852 00:29:56.682 Removing: /var/run/dpdk/spdk_pid62351 00:29:56.682 Removing: /var/run/dpdk/spdk_pid62756 00:29:56.682 Removing: /var/run/dpdk/spdk_pid65273 00:29:56.682 Removing: /var/run/dpdk/spdk_pid65314 00:29:56.682 Removing: /var/run/dpdk/spdk_pid65681 00:29:56.682 Removing: 
/var/run/dpdk/spdk_pid65732 00:29:56.682 Removing: /var/run/dpdk/spdk_pid66149 00:29:56.682 Removing: /var/run/dpdk/spdk_pid66740 00:29:56.682 Removing: /var/run/dpdk/spdk_pid67189 00:29:56.682 Removing: /var/run/dpdk/spdk_pid68276 00:29:56.682 Removing: /var/run/dpdk/spdk_pid69372 00:29:56.682 Removing: /var/run/dpdk/spdk_pid69495 00:29:56.682 Removing: /var/run/dpdk/spdk_pid69557 00:29:56.682 Removing: /var/run/dpdk/spdk_pid71198 00:29:56.682 Removing: /var/run/dpdk/spdk_pid71556 00:29:56.682 Removing: /var/run/dpdk/spdk_pid75535 00:29:56.682 Removing: /var/run/dpdk/spdk_pid75970 00:29:56.682 Removing: /var/run/dpdk/spdk_pid76618 00:29:56.682 Removing: /var/run/dpdk/spdk_pid77159 00:29:56.682 Removing: /var/run/dpdk/spdk_pid82815 00:29:56.682 Removing: /var/run/dpdk/spdk_pid83320 00:29:56.682 Removing: /var/run/dpdk/spdk_pid83430 00:29:56.682 Removing: /var/run/dpdk/spdk_pid83590 00:29:56.682 Removing: /var/run/dpdk/spdk_pid83648 00:29:56.682 Removing: /var/run/dpdk/spdk_pid83700 00:29:56.682 Removing: /var/run/dpdk/spdk_pid83755 00:29:56.682 Removing: /var/run/dpdk/spdk_pid83938 00:29:56.682 Removing: /var/run/dpdk/spdk_pid84100 00:29:56.682 Removing: /var/run/dpdk/spdk_pid84392 00:29:56.942 Removing: /var/run/dpdk/spdk_pid84522 00:29:56.942 Removing: /var/run/dpdk/spdk_pid84785 00:29:56.942 Removing: /var/run/dpdk/spdk_pid84916 00:29:56.942 Removing: /var/run/dpdk/spdk_pid85051 00:29:56.942 Removing: /var/run/dpdk/spdk_pid85441 00:29:56.942 Removing: /var/run/dpdk/spdk_pid85905 00:29:56.942 Removing: /var/run/dpdk/spdk_pid85906 00:29:56.942 Removing: /var/run/dpdk/spdk_pid85907 00:29:56.942 Removing: /var/run/dpdk/spdk_pid86199 00:29:56.942 Removing: /var/run/dpdk/spdk_pid86482 00:29:56.942 Removing: /var/run/dpdk/spdk_pid86916 00:29:56.942 Removing: /var/run/dpdk/spdk_pid87283 00:29:56.942 Removing: /var/run/dpdk/spdk_pid87919 00:29:56.942 Removing: /var/run/dpdk/spdk_pid87921 00:29:56.942 Removing: /var/run/dpdk/spdk_pid88316 00:29:56.942 Removing: /var/run/dpdk/spdk_pid88336 00:29:56.942 Removing: /var/run/dpdk/spdk_pid88350 00:29:56.942 Removing: /var/run/dpdk/spdk_pid88375 00:29:56.942 Removing: /var/run/dpdk/spdk_pid88388 00:29:56.942 Removing: /var/run/dpdk/spdk_pid88804 00:29:56.942 Removing: /var/run/dpdk/spdk_pid88854 00:29:56.942 Removing: /var/run/dpdk/spdk_pid89252 00:29:56.942 Removing: /var/run/dpdk/spdk_pid89503 00:29:56.942 Removing: /var/run/dpdk/spdk_pid90049 00:29:56.942 Removing: /var/run/dpdk/spdk_pid90700 00:29:56.942 Removing: /var/run/dpdk/spdk_pid92096 00:29:56.942 Removing: /var/run/dpdk/spdk_pid92749 00:29:56.942 Removing: /var/run/dpdk/spdk_pid92757 00:29:56.942 Removing: /var/run/dpdk/spdk_pid94819 00:29:56.942 Removing: /var/run/dpdk/spdk_pid94909 00:29:56.942 Removing: /var/run/dpdk/spdk_pid94999 00:29:56.942 Removing: /var/run/dpdk/spdk_pid95090 00:29:56.942 Removing: /var/run/dpdk/spdk_pid95253 00:29:56.942 Removing: /var/run/dpdk/spdk_pid95342 00:29:56.942 Removing: /var/run/dpdk/spdk_pid95431 00:29:56.942 Removing: /var/run/dpdk/spdk_pid95517 00:29:56.942 Removing: /var/run/dpdk/spdk_pid95928 00:29:56.942 Removing: /var/run/dpdk/spdk_pid96715 00:29:56.942 Removing: /var/run/dpdk/spdk_pid98121 00:29:56.942 Removing: /var/run/dpdk/spdk_pid98322 00:29:56.942 Removing: /var/run/dpdk/spdk_pid98612 00:29:56.942 Removing: /var/run/dpdk/spdk_pid99165 00:29:56.942 Removing: /var/run/dpdk/spdk_pid99564 00:29:56.942 Clean 00:29:57.202 15:37:43 -- common/autotest_common.sh@1453 -- # return 0 00:29:57.202 15:37:43 -- spdk/autotest.sh@389 -- # timing_exit 
post_cleanup 00:29:57.202 15:37:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:57.202 15:37:43 -- common/autotest_common.sh@10 -- # set +x 00:29:57.202 15:37:43 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:29:57.202 15:37:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:57.202 15:37:43 -- common/autotest_common.sh@10 -- # set +x 00:29:57.202 15:37:43 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:57.202 15:37:43 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:57.202 15:37:43 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:57.202 15:37:43 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:29:57.202 15:37:43 -- spdk/autotest.sh@398 -- # hostname 00:29:57.203 15:37:43 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:57.462 geninfo: WARNING: invalid characters removed from testname! 00:30:24.021 15:38:08 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:24.953 15:38:11 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:27.483 15:38:13 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:29.386 15:38:15 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:31.294 15:38:17 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:33.919 15:38:19 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:35.828 15:38:22 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:35.828 15:38:22 -- spdk/autorun.sh@1 -- $ timing_finish 00:30:35.828 15:38:22 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:30:35.828 15:38:22 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:35.828 15:38:22 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:30:35.828 15:38:22 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:35.828 + [[ -n 5214 ]] 00:30:35.828 + sudo kill 5214 00:30:35.838 [Pipeline] } 00:30:35.854 [Pipeline] // timeout 00:30:35.858 [Pipeline] } 00:30:35.872 [Pipeline] // stage 00:30:35.877 [Pipeline] } 00:30:35.890 [Pipeline] // catchError 00:30:35.899 [Pipeline] stage 00:30:35.901 [Pipeline] { (Stop VM) 00:30:35.913 [Pipeline] sh 00:30:36.195 + vagrant halt 00:30:38.730 ==> default: Halting domain... 00:30:45.314 [Pipeline] sh 00:30:45.598 + vagrant destroy -f 00:30:48.138 ==> default: Removing domain... 00:30:48.411 [Pipeline] sh 00:30:48.696 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:30:48.706 [Pipeline] } 00:30:48.721 [Pipeline] // stage 00:30:48.726 [Pipeline] } 00:30:48.740 [Pipeline] // dir 00:30:48.746 [Pipeline] } 00:30:48.760 [Pipeline] // wrap 00:30:48.766 [Pipeline] } 00:30:48.779 [Pipeline] // catchError 00:30:48.789 [Pipeline] stage 00:30:48.791 [Pipeline] { (Epilogue) 00:30:48.804 [Pipeline] sh 00:30:49.089 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:54.376 [Pipeline] catchError 00:30:54.377 [Pipeline] { 00:30:54.385 [Pipeline] sh 00:30:54.667 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:54.667 Artifacts sizes are good 00:30:54.676 [Pipeline] } 00:30:54.688 [Pipeline] // catchError 00:30:54.696 [Pipeline] archiveArtifacts 00:30:54.703 Archiving artifacts 00:30:54.842 [Pipeline] cleanWs 00:30:54.856 [WS-CLEANUP] Deleting project workspace... 00:30:54.856 [WS-CLEANUP] Deferred wipeout is used... 00:30:54.869 [WS-CLEANUP] done 00:30:54.871 [Pipeline] } 00:30:54.881 [Pipeline] // stage 00:30:54.884 [Pipeline] } 00:30:54.892 [Pipeline] // node 00:30:54.895 [Pipeline] End of Pipeline 00:30:54.920 Finished: SUCCESS
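
For reference, the keyring_linux sequence exercised in the test output above reduces to the following shell sketch. This is a minimal reconstruction from the RPC invocations visible in this log, assuming the same SPDK checkout (/home/vagrant/spdk_repo/spdk) and bperf RPC socket (/var/tmp/bperf.sock); the initial keyctl add happens earlier in the test than the portion shown here, so that invocation is inferred from standard keyctl usage, with the PSK payload taken from the key printed in the log.

  # Store the NVMe/TCP TLS PSK in the session keyring; keyctl add prints the key serial.
  sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)

  # Enable keyring-backed PSK lookup in the bdevperf app, then complete framework init.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

  # Attach a TLS-protected NVMe-oF TCP controller, naming the keyring key as the PSK.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

  # Run I/O against the attached namespace, then detach the controller and drop the key.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
  keyctl unlink "$sn" @s

The negative case recorded above (the NOT-wrapped bdev_nvme_attach_controller with --psk :spdk-test:key1 failing with Code=-5 Input/output error) corresponds to attempting the attach after the named key has been unlinked from the keyring, which is the expected failure mode the test asserts.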